23:10:55 Started by timer
23:10:55 Running as SYSTEM
23:10:55 [EnvInject] - Loading node environment variables.
23:10:55 Building remotely on prd-ubuntu1804-docker-8c-8g-25416 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:55 [ssh-agent] Looking for ssh-agent implementation...
23:10:55 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:55 $ ssh-agent
23:10:55 SSH_AUTH_SOCK=/tmp/ssh-2vQN8zI4cs0u/agent.2054
23:10:55 SSH_AGENT_PID=2056
23:10:55 [ssh-agent] Started.
23:10:55 Running ssh-add (command line suppressed)
23:10:55 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_366184376703316826.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_366184376703316826.key)
23:10:55 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:55 The recommended git tool is: NONE
23:10:56 using credential onap-jenkins-ssh
23:10:56 Wiping out workspace first.
23:10:56 Cloning the remote Git repository
23:10:56 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:10:57 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:10:57 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:10:57 > git --version # timeout=10
23:10:57 > git --version # 'git version 2.17.1'
23:10:57 using GIT_SSH to set credentials Gerrit user
23:10:57 Verifying host key using manually-configured host key entries
23:10:57 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:10:57 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:10:57 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:10:57 Avoid second fetch
23:10:57 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:10:57 Checking out Revision deb0e121d5b4b9bd68334c2565aae21d8eed0d21 (refs/remotes/origin/master)
23:10:57 > git config core.sparsecheckout # timeout=10
23:10:58 > git checkout -f deb0e121d5b4b9bd68334c2565aae21d8eed0d21 # timeout=30
23:10:58 Commit message: "Improve stability in integration tests"
23:10:58 > git rev-list --no-walk deb0e121d5b4b9bd68334c2565aae21d8eed0d21 # timeout=10
23:10:58 provisioning config files...
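The checkout above pins the build to a fixed revision rather than a branch tip. A minimal sketch of reproducing it outside Jenkins, using the mirror URL and revision from the log (the local destination path is hypothetical):

    #!/usr/bin/env bash
    # Re-create the Jenkins checkout: init, fetch all branches, check out the SHA.
    set -euo pipefail
    REPO=git://cloud.onap.org/mirror/policy/docker.git
    REV=deb0e121d5b4b9bd68334c2565aae21d8eed0d21   # revision from the log above
    DEST=./policy-docker                           # hypothetical local path
    git init "$DEST" && cd "$DEST"
    git fetch --tags --progress -- "$REPO" '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f "$REV"   # detached HEAD at the pinned revision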
23:10:58 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:10:58 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:10:58 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7894626512866828076.sh
23:10:58 ---> python-tools-install.sh
23:10:58 Setup pyenv:
23:10:58 * system (set by /opt/pyenv/version)
23:10:58 * 3.8.13 (set by /opt/pyenv/version)
23:10:58 * 3.9.13 (set by /opt/pyenv/version)
23:10:58 * 3.10.6 (set by /opt/pyenv/version)
23:11:02 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-TV3n
23:11:02 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:05 lf-activate-venv(): INFO: Installing: lftools
23:11:38 lf-activate-venv(): INFO: Adding /tmp/venv-TV3n/bin to PATH
23:11:38 Generating Requirements File
23:12:05 Python 3.10.6
23:12:05 pip 24.0 from /tmp/venv-TV3n/lib/python3.10/site-packages/pip (python 3.10)
23:12:06 appdirs==1.4.4
23:12:06 argcomplete==3.3.0
23:12:06 aspy.yaml==1.3.0
23:12:06 attrs==23.2.0
23:12:06 autopage==0.5.2
23:12:06 beautifulsoup4==4.12.3
23:12:06 boto3==1.34.90
23:12:06 botocore==1.34.90
23:12:06 bs4==0.0.2
23:12:06 cachetools==5.3.3
23:12:06 certifi==2024.2.2
23:12:06 cffi==1.16.0
23:12:06 cfgv==3.4.0
23:12:06 chardet==5.2.0
23:12:06 charset-normalizer==3.3.2
23:12:06 click==8.1.7
23:12:06 cliff==4.6.0
23:12:06 cmd2==2.4.3
23:12:06 cryptography==3.3.2
23:12:06 debtcollector==3.0.0
23:12:06 decorator==5.1.1
23:12:06 defusedxml==0.7.1
23:12:06 Deprecated==1.2.14
23:12:06 distlib==0.3.8
23:12:06 dnspython==2.6.1
23:12:06 docker==4.2.2
23:12:06 dogpile.cache==1.3.2
23:12:06 email_validator==2.1.1
23:12:06 filelock==3.13.4
23:12:06 future==1.0.0
23:12:06 gitdb==4.0.11
23:12:06 GitPython==3.1.43
23:12:06 google-auth==2.29.0
23:12:06 httplib2==0.22.0
23:12:06 identify==2.5.36
23:12:06 idna==3.7
23:12:06 importlib-resources==1.5.0
23:12:06 iso8601==2.1.0
23:12:06 Jinja2==3.1.3
23:12:06 jmespath==1.0.1
23:12:06 jsonpatch==1.33
23:12:06 jsonpointer==2.4
23:12:06 jsonschema==4.21.1
23:12:06 jsonschema-specifications==2023.12.1
23:12:06 keystoneauth1==5.6.0
23:12:06 kubernetes==29.0.0
23:12:06 lftools==0.37.10
23:12:06 lxml==5.2.1
23:12:06 MarkupSafe==2.1.5
23:12:06 msgpack==1.0.8
23:12:06 multi_key_dict==2.0.3
23:12:06 munch==4.0.0
23:12:06 netaddr==1.2.1
23:12:06 netifaces==0.11.0
23:12:06 niet==1.4.2
23:12:06 nodeenv==1.8.0
23:12:06 oauth2client==4.1.3
23:12:06 oauthlib==3.2.2
23:12:06 openstacksdk==3.1.0
23:12:06 os-client-config==2.1.0
23:12:06 os-service-types==1.7.0
23:12:06 osc-lib==3.0.1
23:12:06 oslo.config==9.4.0
23:12:06 oslo.context==5.5.0
23:12:06 oslo.i18n==6.3.0
23:12:06 oslo.log==5.5.1
23:12:06 oslo.serialization==5.4.0
23:12:06 oslo.utils==7.1.0
23:12:06 packaging==24.0
23:12:06 pbr==6.0.0
23:12:06 platformdirs==4.2.1
23:12:06 prettytable==3.10.0
23:12:06 pyasn1==0.6.0
23:12:06 pyasn1_modules==0.4.0
23:12:06 pycparser==2.22
23:12:06 pygerrit2==2.0.15
23:12:06 PyGithub==2.3.0
23:12:06 pyinotify==0.9.6
23:12:06 PyJWT==2.8.0
23:12:06 PyNaCl==1.5.0
23:12:06 pyparsing==2.4.7
23:12:06 pyperclip==1.8.2
23:12:06 pyrsistent==0.20.0
23:12:06 python-cinderclient==9.5.0
23:12:06 python-dateutil==2.9.0.post0
23:12:06 python-heatclient==3.5.0
23:12:06 python-jenkins==1.8.2
23:12:06 python-keystoneclient==5.4.0
23:12:06 python-magnumclient==4.4.0
23:12:06 python-novaclient==18.6.0
23:12:06 python-openstackclient==6.6.0
23:12:06 python-swiftclient==4.5.0
23:12:06 PyYAML==6.0.1
23:12:06 referencing==0.34.0
23:12:06 requests==2.31.0
23:12:06 requests-oauthlib==2.0.0
23:12:06 requestsexceptions==1.4.0
23:12:06 rfc3986==2.0.0
23:12:06 rpds-py==0.18.0
23:12:06 rsa==4.9
23:12:06 ruamel.yaml==0.18.6
23:12:06 ruamel.yaml.clib==0.2.8
23:12:06 s3transfer==0.10.1
23:12:06 simplejson==3.19.2
23:12:06 six==1.16.0
23:12:06 smmap==5.0.1
23:12:06 soupsieve==2.5
23:12:06 stevedore==5.2.0
23:12:06 tabulate==0.9.0
23:12:06 toml==0.10.2
23:12:06 tomlkit==0.12.4
23:12:06 tqdm==4.66.2
23:12:06 typing_extensions==4.11.0
23:12:06 tzdata==2024.1
23:12:06 urllib3==1.26.18
23:12:06 virtualenv==20.26.0
23:12:06 wcwidth==0.2.13
23:12:06 websocket-client==1.8.0
23:12:06 wrapt==1.16.0
23:12:06 xdg==6.0.0
23:12:06 xmltodict==0.13.0
23:12:06 yq==3.4.1
23:12:06 [EnvInject] - Injecting environment variables from a build step.
23:12:06 [EnvInject] - Injecting as environment variables the properties content
23:12:06 SET_JDK_VERSION=openjdk17
23:12:06 GIT_URL="git://cloud.onap.org/mirror"
23:12:06
23:12:06 [EnvInject] - Variables injected successfully.
23:12:06 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins3305946228350909483.sh
23:12:06 ---> update-java-alternatives.sh
23:12:06 ---> Updating Java version
23:12:06 ---> Ubuntu/Debian system detected
23:12:06 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:06 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:06 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:06 openjdk version "17.0.4" 2022-07-19
23:12:06 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:06 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:06 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:06 [EnvInject] - Injecting environment variables from a build step.
23:12:06 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:06 [EnvInject] - Variables injected successfully.
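The update-java-alternatives.sh step amounts to pointing the java and javac alternatives at the OpenJDK 17 install and exporting JAVA_HOME. A minimal sketch of the equivalent manual commands, assuming a Debian/Ubuntu host with openjdk-17 already installed (the "manual mode" lines above indicate the same mechanism):

    #!/usr/bin/env bash
    # Switch the system default JDK to OpenJDK 17, matching the log output above.
    set -euo pipefail
    JDK=/usr/lib/jvm/java-17-openjdk-amd64
    sudo update-alternatives --set java  "$JDK/bin/java"
    sudo update-alternatives --set javac "$JDK/bin/javac"
    export JAVA_HOME="$JDK"
    java -version   # expect: openjdk version "17.x"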
23:12:06 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins812043304045634478.sh
23:12:06 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:06 + set +u
23:12:06 + save_set
23:12:06 + RUN_CSIT_SAVE_SET=ehxB
23:12:06 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:06 + '[' 1 -eq 0 ']'
23:12:06 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:06 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:06 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:06 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:06 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:06 + export ROBOT_VARIABLES=
23:12:06 + ROBOT_VARIABLES=
23:12:06 + export PROJECT=pap
23:12:06 + PROJECT=pap
23:12:06 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:06 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:06 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:06 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:06 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:06 + relax_set
23:12:06 + set +e
23:12:06 + set +o pipefail
23:12:06 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:06 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:06 +++ mktemp -d
23:12:06 ++ ROBOT_VENV=/tmp/tmp.u4d9zzrcNi
23:12:06 ++ echo ROBOT_VENV=/tmp/tmp.u4d9zzrcNi
23:12:06 +++ python3 --version
23:12:06 ++ echo 'Python version is: Python 3.6.9'
23:12:06 Python version is: Python 3.6.9
23:12:06 ++ python3 -m venv --clear /tmp/tmp.u4d9zzrcNi
23:12:08 ++ source /tmp/tmp.u4d9zzrcNi/bin/activate
23:12:08 +++ deactivate nondestructive
23:12:08 +++ '[' -n '' ']'
23:12:08 +++ '[' -n '' ']'
23:12:08 +++ '[' -n /bin/bash -o -n '' ']'
23:12:08 +++ hash -r
23:12:08 +++ '[' -n '' ']'
23:12:08 +++ unset VIRTUAL_ENV
23:12:08 +++ '[' '!' nondestructive = nondestructive ']'
23:12:08 +++ VIRTUAL_ENV=/tmp/tmp.u4d9zzrcNi
23:12:08 +++ export VIRTUAL_ENV
23:12:08 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:08 +++ PATH=/tmp/tmp.u4d9zzrcNi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:08 +++ export PATH
23:12:08 +++ '[' -n '' ']'
23:12:08 +++ '[' -z '' ']'
23:12:08 +++ _OLD_VIRTUAL_PS1=
23:12:08 +++ '[' 'x(tmp.u4d9zzrcNi) ' '!=' x ']'
23:12:08 +++ PS1='(tmp.u4d9zzrcNi) '
23:12:08 +++ export PS1
23:12:08 +++ '[' -n /bin/bash -o -n '' ']'
23:12:08 +++ hash -r
23:12:08 ++ set -exu
23:12:08 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:11 ++ echo 'Installing Python Requirements'
23:12:11 Installing Python Requirements
23:12:11 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:29 ++ python3 -m pip -qq freeze
23:12:29 bcrypt==4.0.1
23:12:29 beautifulsoup4==4.12.3
23:12:29 bitarray==2.9.2
23:12:29 certifi==2024.2.2
23:12:29 cffi==1.15.1
23:12:29 charset-normalizer==2.0.12
23:12:29 cryptography==40.0.2
23:12:29 decorator==5.1.1
23:12:29 elasticsearch==7.17.9
23:12:29 elasticsearch-dsl==7.4.1
23:12:29 enum34==1.1.10
23:12:29 idna==3.7
23:12:29 importlib-resources==5.4.0
23:12:29 ipaddr==2.2.0
23:12:29 isodate==0.6.1
23:12:29 jmespath==0.10.0
23:12:29 jsonpatch==1.32
23:12:29 jsonpath-rw==1.4.0
23:12:29 jsonpointer==2.3
23:12:29 lxml==5.2.1
23:12:29 netaddr==0.8.0
23:12:29 netifaces==0.11.0
23:12:29 odltools==0.1.28
23:12:29 paramiko==3.4.0
23:12:29 pkg_resources==0.0.0
23:12:29 ply==3.11
23:12:29 pyang==2.6.0
23:12:29 pyangbind==0.8.1
23:12:29 pycparser==2.21
23:12:29 pyhocon==0.3.60
23:12:29 PyNaCl==1.5.0
23:12:29 pyparsing==3.1.2
23:12:29 python-dateutil==2.9.0.post0
23:12:29 regex==2023.8.8
23:12:29 requests==2.27.1
23:12:29 robotframework==6.1.1
23:12:29 robotframework-httplibrary==0.4.2
23:12:29 robotframework-pythonlibcore==3.0.0
23:12:29 robotframework-requests==0.9.4
23:12:29 robotframework-selenium2library==3.0.0
23:12:29 robotframework-seleniumlibrary==5.1.3
23:12:29 robotframework-sshlibrary==3.8.0
23:12:29 scapy==2.5.0
23:12:29 scp==0.14.5
23:12:29 selenium==3.141.0
23:12:29 six==1.16.0
23:12:29 soupsieve==2.3.2.post1
23:12:29 urllib3==1.26.18
23:12:29 waitress==2.0.0
23:12:29 WebOb==1.8.7
23:12:29 WebTest==3.0.0
23:12:29 zipp==3.6.0
23:12:29 ++ mkdir -p /tmp/tmp.u4d9zzrcNi/src/onap
23:12:29 ++ rm -rf /tmp/tmp.u4d9zzrcNi/src/onap/testsuite
23:12:29 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:34 ++ echo 'Installing python confluent-kafka library'
23:12:34 Installing python confluent-kafka library
23:12:34 ++ python3 -m pip install -qq confluent-kafka
23:12:36 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:36 Uninstall docker-py and reinstall docker.
23:12:36 ++ python3 -m pip uninstall -y -qq docker
23:12:36 ++ python3 -m pip install -U -qq docker
23:12:37 ++ python3 -m pip -qq freeze
23:12:38 bcrypt==4.0.1
23:12:38 beautifulsoup4==4.12.3
23:12:38 bitarray==2.9.2
23:12:38 certifi==2024.2.2
23:12:38 cffi==1.15.1
23:12:38 charset-normalizer==2.0.12
23:12:38 confluent-kafka==2.3.0
23:12:38 cryptography==40.0.2
23:12:38 decorator==5.1.1
23:12:38 deepdiff==5.7.0
23:12:38 dnspython==2.2.1
23:12:38 docker==5.0.3
23:12:38 elasticsearch==7.17.9
23:12:38 elasticsearch-dsl==7.4.1
23:12:38 enum34==1.1.10
23:12:38 future==1.0.0
23:12:38 idna==3.7
23:12:38 importlib-resources==5.4.0
23:12:38 ipaddr==2.2.0
23:12:38 isodate==0.6.1
23:12:38 Jinja2==3.0.3
23:12:38 jmespath==0.10.0
23:12:38 jsonpatch==1.32
23:12:38 jsonpath-rw==1.4.0
23:12:38 jsonpointer==2.3
23:12:38 kafka-python==2.0.2
23:12:38 lxml==5.2.1
23:12:38 MarkupSafe==2.0.1
23:12:38 more-itertools==5.0.0
23:12:38 netaddr==0.8.0
23:12:38 netifaces==0.11.0
23:12:38 odltools==0.1.28
23:12:38 ordered-set==4.0.2
23:12:38 paramiko==3.4.0
23:12:38 pbr==6.0.0
23:12:38 pkg_resources==0.0.0
23:12:38 ply==3.11
23:12:38 protobuf==3.19.6
23:12:38 pyang==2.6.0
23:12:38 pyangbind==0.8.1
23:12:38 pycparser==2.21
23:12:38 pyhocon==0.3.60
23:12:38 PyNaCl==1.5.0
23:12:38 pyparsing==3.1.2
23:12:38 python-dateutil==2.9.0.post0
23:12:38 PyYAML==6.0.1
23:12:38 regex==2023.8.8
23:12:38 requests==2.27.1
23:12:38 robotframework==6.1.1
23:12:38 robotframework-httplibrary==0.4.2
23:12:38 robotframework-onap==0.6.0.dev105
23:12:38 robotframework-pythonlibcore==3.0.0
23:12:38 robotframework-requests==0.9.4
23:12:38 robotframework-selenium2library==3.0.0
23:12:38 robotframework-seleniumlibrary==5.1.3
23:12:38 robotframework-sshlibrary==3.8.0
23:12:38 robotlibcore-temp==1.0.2
23:12:38 scapy==2.5.0
23:12:38 scp==0.14.5
23:12:38 selenium==3.141.0
23:12:38 six==1.16.0
23:12:38 soupsieve==2.3.2.post1
23:12:38 urllib3==1.26.18
23:12:38 waitress==2.0.0
23:12:38 WebOb==1.8.7
23:12:38 websocket-client==1.3.1
23:12:38 WebTest==3.0.0
23:12:38 zipp==3.6.0
23:12:38 ++ uname
23:12:38 ++ grep -q Linux
23:12:38 ++ sudo apt-get -y -qq install libxml2-utils
23:12:38 + load_set
23:12:38 + _setopts=ehuxB
23:12:38 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:38 ++ tr : ' '
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o braceexpand
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o hashall
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o interactive-comments
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o nounset
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o xtrace
23:12:38 ++ echo ehuxB
23:12:38 ++ sed 's/./& /g'
23:12:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:38 + set +e
23:12:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:38 + set +h
23:12:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:38 + set +u
23:12:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:38 + set +x
23:12:38 + source_safely /tmp/tmp.u4d9zzrcNi/bin/activate
23:12:38 + '[' -z /tmp/tmp.u4d9zzrcNi/bin/activate ']'
23:12:38 + relax_set
23:12:38 + set +e
23:12:38 + set +o pipefail
23:12:38 + . /tmp/tmp.u4d9zzrcNi/bin/activate
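The save_set, relax_set, and load_set calls traced above implement a save-relax-restore pattern for shell options, so that sourced scripts run without errexit/pipefail and the caller's options come back afterwards. A reconstruction for illustration only (the project's actual helpers live in run-project-csit.sh and differ in detail):

    # Save the caller's option state: $- holds short flags (e.g. ehxB),
    # SHELLOPTS holds the colon-separated long option names.
    save_set() {
        SAVED_FLAGS="$-"
        SAVED_SHELLOPTS="$SHELLOPTS"
    }
    # Loosen error handling before sourcing third-party scripts.
    relax_set() {
        set +e
        set +o pipefail
    }
    # Restore: clear whatever is currently set, then re-apply the saved state,
    # mirroring the per-option loops visible in the trace above.
    # Assumes a script context ($- contains only settable flags like e/h/u/x/B).
    load_set() {
        local i
        for i in $(echo "$SHELLOPTS" | tr ':' ' '); do set +o "$i"; done
        for i in $(echo "$SAVED_SHELLOPTS" | tr ':' ' '); do set -o "$i"; done
        for i in $(echo "$SAVED_FLAGS" | sed 's/./& /g'); do set "-$i"; done
    }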
23:12:38 ++ deactivate nondestructive
23:12:38 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:38 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:38 ++ export PATH
23:12:38 ++ unset _OLD_VIRTUAL_PATH
23:12:38 ++ '[' -n '' ']'
23:12:38 ++ '[' -n /bin/bash -o -n '' ']'
23:12:38 ++ hash -r
23:12:38 ++ '[' -n '' ']'
23:12:38 ++ unset VIRTUAL_ENV
23:12:38 ++ '[' '!' nondestructive = nondestructive ']'
23:12:38 ++ VIRTUAL_ENV=/tmp/tmp.u4d9zzrcNi
23:12:38 ++ export VIRTUAL_ENV
23:12:38 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:38 ++ PATH=/tmp/tmp.u4d9zzrcNi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:38 ++ export PATH
23:12:38 ++ '[' -n '' ']'
23:12:38 ++ '[' -z '' ']'
23:12:38 ++ _OLD_VIRTUAL_PS1='(tmp.u4d9zzrcNi) '
23:12:38 ++ '[' 'x(tmp.u4d9zzrcNi) ' '!=' x ']'
23:12:38 ++ PS1='(tmp.u4d9zzrcNi) (tmp.u4d9zzrcNi) '
23:12:38 ++ export PS1
23:12:38 ++ '[' -n /bin/bash -o -n '' ']'
23:12:38 ++ hash -r
23:12:38 + load_set
23:12:38 + _setopts=hxB
23:12:38 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:38 ++ tr : ' '
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o braceexpand
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o hashall
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o interactive-comments
23:12:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:38 + set +o xtrace
23:12:38 ++ echo hxB
23:12:38 ++ sed 's/./& /g'
23:12:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:38 + set +h
23:12:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:38 + set +x
23:12:38 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:38 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:38 + export TEST_OPTIONS=
23:12:38 + TEST_OPTIONS=
23:12:38 ++ mktemp -d
23:12:38 + WORKDIR=/tmp/tmp.701UMRpgdq
23:12:38 + cd /tmp/tmp.701UMRpgdq
23:12:38 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:38 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:38 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:38 Configure a credential helper to remove this warning. See
23:12:38 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:38
23:12:38 Login Succeeded
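The two warnings above come from passing the password with -p on the command line. A sketch of the quieter recommended form, using the same public read-only Nexus credentials that appear in the log:

    # Feed the password on stdin to avoid the --password CLI warning.
    printf '%s' docker | docker login -u docker --password-stdin nexus3.onap.org:10001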
23:12:38 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:38 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:38 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:38 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:38 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:38 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:38 + relax_set
23:12:38 + set +e
23:12:38 + set +o pipefail
23:12:38 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:38 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:38 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:38 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:38 +++ GERRIT_BRANCH=master
23:12:38 +++ echo GERRIT_BRANCH=master
23:12:38 GERRIT_BRANCH=master
23:12:38 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:38 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:38 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:38 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:39 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:39 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:39 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:39 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:39 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:39 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:39 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:39 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:39 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:39 +++ grafana=false
23:12:39 +++ gui=false
23:12:39 +++ [[ 2 -gt 0 ]]
23:12:39 +++ key=apex-pdp
23:12:39 +++ case $key in
23:12:39 +++ echo apex-pdp
23:12:39 apex-pdp
23:12:39 +++ component=apex-pdp
23:12:39 +++ shift
23:12:39 +++ [[ 1 -gt 0 ]]
23:12:39 +++ key=--grafana
23:12:39 +++ case $key in
23:12:39 +++ grafana=true
23:12:39 +++ shift
23:12:39 +++ [[ 0 -gt 0 ]]
23:12:39 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:39 +++ echo 'Configuring docker compose...'
23:12:39 Configuring docker compose...
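The start-compose.sh trace above shows a small hand-rolled argument loop: bare words pick the component to start, flags toggle extras. A reconstruction for illustration (the real script may handle more cases):

    # Parse e.g. "apex-pdp --grafana" the way the trace above does.
    grafana=false
    gui=false
    component=""
    while [[ $# -gt 0 ]]; do
        key="$1"
        case $key in
            --grafana) grafana=true ;;     # enable the Grafana/Prometheus stack
            --gui)     gui=true ;;         # enable the GUI service
            *)         component="$key" ;; # positional word selects the component
        esac
        shift
    done
    echo "component=$component grafana=$grafana gui=$gui"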
23:12:39 +++ source export-ports.sh
23:12:39 +++ source get-versions.sh
23:12:42 +++ '[' -z pap ']'
23:12:42 +++ '[' -n apex-pdp ']'
23:12:42 +++ '[' apex-pdp == logs ']'
23:12:42 +++ '[' true = true ']'
23:12:42 +++ echo 'Starting apex-pdp application with Grafana'
23:12:42 Starting apex-pdp application with Grafana
23:12:42 +++ docker-compose up -d apex-pdp grafana
23:12:42 Creating network "compose_default" with the default driver
23:12:42 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:43 latest: Pulling from prom/prometheus
23:12:46 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
23:12:46 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:12:46 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:12:46 latest: Pulling from grafana/grafana
23:12:50 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
23:12:50 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
23:12:50 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
23:12:51 10.10.2: Pulling from mariadb
23:12:55 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
23:12:55 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
23:12:55 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
23:12:55 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
23:12:59 Digest: sha256:d8f1d8ae67fc0b53114a44577cb43c90a3a3281908d2f2418d7fbd203413bd6a
23:12:59 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
23:12:59 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
23:12:59 latest: Pulling from confluentinc/cp-zookeeper
23:13:12 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
23:13:12 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
23:13:12 Pulling kafka (confluentinc/cp-kafka:latest)...
23:13:13 latest: Pulling from confluentinc/cp-kafka
23:13:16 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
23:13:16 Status: Downloaded newer image for confluentinc/cp-kafka:latest
23:13:16 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
23:13:16 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
23:13:22 Digest: sha256:bb84cf3d3a5fa846e94bde98a8ed8f440af1422e832e1525aab46fcce821d237
23:13:22 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
23:13:22 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
23:13:23 3.1.2-SNAPSHOT: Pulling from onap/policy-api
23:13:24 Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce
23:13:24 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
23:13:24 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
23:13:24 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
23:13:26 Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06
23:13:26 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
23:13:26 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
23:13:26 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
23:13:34 Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982
23:13:34 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
23:13:34 Creating simulator ...
23:13:34 Creating zookeeper ...
23:13:34 Creating mariadb ...
23:13:34 Creating prometheus ...
23:13:48 Creating mariadb ... done
23:13:48 Creating policy-db-migrator ...
23:13:49 Creating policy-db-migrator ... done
23:13:49 Creating policy-api ...
23:13:50 Creating policy-api ... done
23:13:51 Creating zookeeper ... done
23:13:51 Creating kafka ...
23:13:52 Creating kafka ... done
23:13:52 Creating policy-pap ...
23:13:53 Creating prometheus ... done
23:13:53 Creating grafana ...
23:13:54 Creating grafana ... done
23:13:55 Creating simulator ... done
23:13:56 Creating policy-pap ... done
23:13:56 Creating policy-apex-pdp ...
23:13:57 Creating policy-apex-pdp ... done
23:13:57 +++ echo 'Prometheus server: http://localhost:30259'
23:13:57 Prometheus server: http://localhost:30259
23:13:57 +++ echo 'Grafana server: http://localhost:30269'
23:13:57 Grafana server: http://localhost:30269
23:13:57 +++ cd /w/workspace/policy-pap-master-project-csit-pap
23:13:57 ++ sleep 10
23:14:07 ++ unset http_proxy https_proxy
23:14:07 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
23:14:07 Waiting for REST to come up on localhost port 30003...
23:14:07 NAMES STATUS
23:14:07 policy-apex-pdp Up 10 seconds
23:14:07 grafana Up 13 seconds
23:14:07 policy-pap Up 11 seconds
23:14:07 kafka Up 15 seconds
23:14:07 policy-api Up 17 seconds
23:14:07 mariadb Up 19 seconds
23:14:07 prometheus Up 14 seconds
23:14:07 zookeeper Up 16 seconds
23:14:07 simulator Up 12 seconds
23:14:12 NAMES STATUS
23:14:12 policy-apex-pdp Up 15 seconds
23:14:12 grafana Up 18 seconds
23:14:12 policy-pap Up 16 seconds
23:14:12 kafka Up 20 seconds
23:14:12 policy-api Up 22 seconds
23:14:12 mariadb Up 24 seconds
23:14:12 prometheus Up 19 seconds
23:14:12 zookeeper Up 21 seconds
23:14:12 simulator Up 17 seconds
23:14:17 NAMES STATUS
23:14:17 policy-apex-pdp Up 20 seconds
23:14:17 grafana Up 23 seconds
23:14:17 policy-pap Up 21 seconds
23:14:17 kafka Up 25 seconds
23:14:17 policy-api Up 27 seconds
23:14:17 mariadb Up 29 seconds
23:14:17 prometheus Up 24 seconds
23:14:17 zookeeper Up 26 seconds
23:14:17 simulator Up 22 seconds
23:14:23 NAMES STATUS
23:14:23 policy-apex-pdp Up 25 seconds
23:14:23 grafana Up 28 seconds
23:14:23 policy-pap Up 26 seconds
23:14:23 kafka Up 30 seconds
23:14:23 policy-api Up 32 seconds
23:14:23 mariadb Up 34 seconds
23:14:23 prometheus Up 29 seconds
23:14:23 zookeeper Up 31 seconds
23:14:23 simulator Up 27 seconds
23:14:28 NAMES STATUS
23:14:28 policy-apex-pdp Up 30 seconds
23:14:28 grafana Up 33 seconds
23:14:28 policy-pap Up 31 seconds
23:14:28 kafka Up 35 seconds
23:14:28 policy-api Up 37 seconds
23:14:28 mariadb Up 39 seconds
23:14:28 prometheus Up 34 seconds
23:14:28 zookeeper Up 36 seconds
23:14:28 simulator Up 32 seconds
23:14:28 ++ export 'SUITES=pap-test.robot
23:14:28 pap-slas.robot'
23:14:28 ++ SUITES='pap-test.robot
23:14:28 pap-slas.robot'
23:14:28 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:28 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
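wait_for_rest.sh itself is not shown in the log, but the repeating docker ps snapshots above suggest a simple poll loop against the PAP port. A plausible sketch, assuming a curl probe and a 5-second interval (the script's real probe and timing may differ):

    HOST=localhost PORT=30003
    echo "Waiting for REST to come up on $HOST port $PORT..."
    until curl -sf "http://$HOST:$PORT" >/dev/null 2>&1; do
        # Print container status each round, as seen in the log above.
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        sleep 5
    done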
23:14:28 + load_set
23:14:28 + _setopts=hxB
23:14:28 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:14:28 ++ tr : ' '
23:14:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:28 + set +o braceexpand
23:14:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:28 + set +o hashall
23:14:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:28 + set +o interactive-comments
23:14:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:28 + set +o xtrace
23:14:28 ++ sed 's/./& /g'
23:14:28 ++ echo hxB
23:14:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:28 + set +h
23:14:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:28 + set +x
23:14:28 + docker_stats
23:14:28 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:14:28 ++ uname -s
23:14:28 + '[' Linux == Darwin ']'
23:14:28 + sh -c 'top -bn1 | head -3'
23:14:28 top - 23:14:28 up 4 min, 0 users, load average: 3.47, 1.59, 0.64
23:14:28 Tasks: 211 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
23:14:28 %Cpu(s): 14.6 us, 3.1 sy, 0.0 ni, 78.9 id, 3.3 wa, 0.0 hi, 0.1 si, 0.1 st
23:14:28 + echo
23:14:28 + sh -c 'free -h'
23:14:28
23:14:28 total used free shared buff/cache available
23:14:28 Mem: 31G 2.7G 22G 1.3M 6.2G 28G
23:14:28 Swap: 1.0G 0B 1.0G
23:14:28 + echo
23:14:28 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:14:28
23:14:28 NAMES STATUS
23:14:28 policy-apex-pdp Up 30 seconds
23:14:28 grafana Up 33 seconds
23:14:28 policy-pap Up 31 seconds
23:14:28 kafka Up 35 seconds
23:14:28 policy-api Up 37 seconds
23:14:28 mariadb Up 39 seconds
23:14:28 prometheus Up 34 seconds
23:14:28 zookeeper Up 36 seconds
23:14:28 simulator Up 32 seconds
23:14:28 + echo
23:14:28 + docker stats --no-stream
23:14:28
23:14:30 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:14:30 86a72bd9ab0b policy-apex-pdp 1.18% 186.5MiB / 31.41GiB 0.58% 6.89kB / 6.63kB 0B / 0B 48
23:14:30 3bcc83e34990 grafana 0.18% 54.88MiB / 31.41GiB 0.17% 18.7kB / 3.25kB 0B / 24.9MB 19
23:14:30 2b1269d6729c policy-pap 2.17% 560.2MiB / 31.41GiB 1.74% 30.8kB / 32.3kB 0B / 149MB 62
23:14:30 f4353fc65ad6 kafka 1.14% 366.7MiB / 31.41GiB 1.14% 69.1kB / 72.5kB 0B / 475kB 83
23:14:30 9a9b888d1941 policy-api 0.10% 504.3MiB / 31.41GiB 1.57% 988kB / 647kB 0B / 0B 52
23:14:30 d9b7cf75b04b mariadb 0.02% 102.4MiB / 31.41GiB 0.32% 935kB / 1.18MB 11MB / 64.3MB 37
23:14:30 2e834a1d7895 prometheus 0.04% 18.65MiB / 31.41GiB 0.06% 1.15kB / 0B 0B / 0B 13
23:14:30 df6985b66398 zookeeper 0.11% 100.6MiB / 31.41GiB 0.31% 55.7kB / 49.3kB 127kB / 406kB 60
23:14:30 51ed40dbd330 simulator 0.07% 121.9MiB / 31.41GiB 0.38% 1.1kB / 0B 0B / 0B 76
23:14:30 + echo
23:14:30
23:14:30 + cd /tmp/tmp.701UMRpgdq
23:14:30 + echo 'Reading the testplan:'
23:14:30 Reading the testplan:
23:14:30 + echo 'pap-test.robot
23:14:30 pap-slas.robot'
23:14:30 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:14:30 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:14:30 + cat testplan.txt
23:14:30 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:30 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:30 ++ xargs
23:14:30 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
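The testplan handling above is a three-step pipeline: strip comments and blank lines, prefix each suite name with the tests directory, and flatten the result into a single space-separated list. A condensed sketch of the same steps (the intermediate file name here is illustrative; the trace writes testplan.txt first and then cats it):

    # Expand a testplan into absolute robot suite paths, as in the trace above.
    TESTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
    egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
        | sed "s|^|$TESTS/|" > expanded.txt
    SUITES=$(xargs < expanded.txt)   # join onto one line for the robot command
    echo "$SUITES"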
23:14:30 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:30 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:30 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:30 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:30 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:30 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
23:14:30 + relax_set
23:14:30 + set +e
23:14:30 + set +o pipefail
23:14:30 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:31 ==============================================================================
23:14:31 pap
23:14:31 ==============================================================================
23:14:31 pap.Pap-Test
23:14:31 ==============================================================================
23:14:32 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:32 ------------------------------------------------------------------------------
23:14:32 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:14:32 ------------------------------------------------------------------------------
23:14:33 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:14:33 ------------------------------------------------------------------------------
23:14:33 Healthcheck :: Verify policy pap health check | PASS |
23:14:33 ------------------------------------------------------------------------------
23:14:53 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:14:53 ------------------------------------------------------------------------------
23:14:53 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:14:53 ------------------------------------------------------------------------------
23:14:54 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:14:54 ------------------------------------------------------------------------------
23:14:54 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:14:54 ------------------------------------------------------------------------------
23:14:54 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:14:54 ------------------------------------------------------------------------------
23:14:55 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:14:55 ------------------------------------------------------------------------------
23:14:55 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:14:55 ------------------------------------------------------------------------------
23:14:55 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:14:55 ------------------------------------------------------------------------------
23:14:55 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:14:55 ------------------------------------------------------------------------------
23:14:55 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:14:55 ------------------------------------------------------------------------------
23:14:56 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:14:56 ------------------------------------------------------------------------------
23:14:56 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:14:56 ------------------------------------------------------------------------------
23:14:56 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:14:56 ------------------------------------------------------------------------------
23:15:16 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:16 ------------------------------------------------------------------------------
23:15:16 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:16 ------------------------------------------------------------------------------
23:15:16 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:16 ------------------------------------------------------------------------------
23:15:17 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:17 ------------------------------------------------------------------------------
23:15:17 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:17 ------------------------------------------------------------------------------
23:15:17 pap.Pap-Test | PASS |
23:15:17 22 tests, 22 passed, 0 failed
23:15:17 ==============================================================================
23:15:17 pap.Pap-Slas
23:15:17 ==============================================================================
23:16:17 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:17 ------------------------------------------------------------------------------
23:16:17 pap.Pap-Slas | PASS |
23:16:17 8 tests, 8 passed, 0 failed
23:16:17 ==============================================================================
23:16:17 pap | PASS |
23:16:17 30 tests, 30 passed, 0 failed
23:16:17 ==============================================================================
23:16:17 Output: /tmp/tmp.701UMRpgdq/output.xml
23:16:17 Log: /tmp/tmp.701UMRpgdq/log.html
23:16:17 Report: /tmp/tmp.701UMRpgdq/report.html
23:16:17 + RESULT=0
23:16:17 + load_set
23:16:17 + _setopts=hxB
23:16:17 ++ tr : ' '
23:16:17 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:17 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:17 + set +o braceexpand
23:16:17 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:17 + set +o hashall
23:16:17 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:17 + set +o interactive-comments
23:16:17 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:17 + set +o xtrace
23:16:17 ++ echo hxB
23:16:17 ++ sed 's/./& /g'
23:16:17 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:17 + set +h
23:16:17 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:17 + set +x
23:16:17 + echo 'RESULT: 0'
23:16:17 RESULT: 0
23:16:17 + exit 0
23:16:17 + on_exit
23:16:17 + rc=0
23:16:17 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:17 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:17 NAMES STATUS
23:16:17 policy-apex-pdp Up 2 minutes
23:16:17 grafana Up 2 minutes
23:16:17 policy-pap Up 2 minutes
23:16:17 kafka Up 2 minutes
23:16:17 policy-api Up 2 minutes
23:16:17 mariadb Up 2 minutes
23:16:17 prometheus Up 2 minutes
23:16:17 zookeeper Up 2 minutes
23:16:17 simulator Up 2 minutes
23:16:17 + docker_stats
23:16:17 ++ uname -s
23:16:17 + '[' Linux == Darwin ']'
23:16:17 + sh -c 'top -bn1 | head -3'
23:16:17 top - 23:16:17 up 6 min, 0 users, load average: 0.88, 1.25, 0.62
23:16:17 Tasks: 201 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
23:16:17 %Cpu(s): 11.5 us, 2.3 sy, 0.0 ni, 83.6 id, 2.6 wa, 0.0 hi, 0.1 si, 0.1 st
23:16:17 + echo
23:16:17
23:16:17 + sh -c 'free -h'
23:16:17 total used free shared buff/cache available
23:16:17 Mem: 31G 2.8G 22G 1.3M 6.2G 28G
23:16:17 Swap: 1.0G 0B 1.0G
23:16:17 + echo
23:16:17
23:16:17 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:17 NAMES STATUS
23:16:17 policy-apex-pdp Up 2 minutes
23:16:17 grafana Up 2 minutes
23:16:17 policy-pap Up 2 minutes
23:16:17 kafka Up 2 minutes
23:16:17 policy-api Up 2 minutes
23:16:17 mariadb Up 2 minutes
23:16:17 prometheus Up 2 minutes
23:16:17 zookeeper Up 2 minutes
23:16:17 simulator Up 2 minutes
23:16:17 + echo
23:16:17
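Everything above reduces to one robot.run invocation whose exit status becomes the job result. Condensed from the trace, with the endpoints and data paths passed as Robot variables (DATA, NODETEMPLATES, and TEST_PLAN_DIR are the exported variables from earlier in the log):

    # Run both suites and propagate Robot's exit code, as run-project-csit.sh does.
    python3 -m robot.run -N pap -v WORKSPACE:/tmp \
        -v POLICY_PAP_IP:localhost:30003 \
        -v POLICY_API_IP:localhost:30002 \
        -v PROMETHEUS_IP:localhost:30259 \
        -v DATA:"$DATA" -v NODETEMPLATES:"$NODETEMPLATES" \
        "$TEST_PLAN_DIR/pap-test.robot" "$TEST_PLAN_DIR/pap-slas.robot"
    RESULT=$?
    echo "RESULT: $RESULT"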
23:16:17 + docker stats --no-stream
23:16:20 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:16:20 86a72bd9ab0b policy-apex-pdp 1.25% 180.2MiB / 31.41GiB 0.56% 56.1kB / 90.6kB 0B / 0B 52
23:16:20 3bcc83e34990 grafana 0.07% 59.48MiB / 31.41GiB 0.18% 19.8kB / 4.48kB 0B / 24.9MB 19
23:16:20 2b1269d6729c policy-pap 0.79% 483MiB / 31.41GiB 1.50% 2.47MB / 1.04MB 0B / 149MB 66
23:16:20 f4353fc65ad6 kafka 1.45% 400.9MiB / 31.41GiB 1.25% 239kB / 215kB 0B / 573kB 85
23:16:20 9a9b888d1941 policy-api 0.10% 570.8MiB / 31.41GiB 1.77% 2.45MB / 1.1MB 0B / 0B 55
23:16:20 d9b7cf75b04b mariadb 0.01% 103.7MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 64.5MB 28
23:16:20 2e834a1d7895 prometheus 0.00% 24.96MiB / 31.41GiB 0.08% 181kB / 10.9kB 0B / 0B 13
23:16:20 df6985b66398 zookeeper 0.09% 100.7MiB / 31.41GiB 0.31% 58.6kB / 50.9kB 127kB / 406kB 60
23:16:20 51ed40dbd330 simulator 0.07% 122.1MiB / 31.41GiB 0.38% 1.45kB / 0B 0B / 0B 78
23:16:20 + echo
23:16:20
23:16:20 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:20 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:20 + relax_set
23:16:20 + set +e
23:16:20 + set +o pipefail
23:16:20 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:20 ++ echo 'Shut down started!'
23:16:20 Shut down started!
23:16:20 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:20 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:20 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:20 ++ source export-ports.sh
23:16:20 ++ source get-versions.sh
23:16:22 ++ echo 'Collecting logs from docker compose containers...'
23:16:22 Collecting logs from docker compose containers...
23:16:22 ++ docker-compose logs
23:16:23 ++ cat docker_compose.log
23:16:23 Attaching to policy-apex-pdp, grafana, policy-pap, kafka, policy-api, policy-db-migrator, mariadb, prometheus, zookeeper, simulator
23:16:23 kafka | ===> User
23:16:23 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:23 kafka | ===> Configuring ...
23:16:23 kafka | Running in Zookeeper mode...
23:16:23 kafka | ===> Running preflight checks ...
23:16:23 kafka | ===> Check if /var/lib/kafka/data is writable ...
23:16:23 kafka | ===> Check if Zookeeper is healthy ...
23:16:23 kafka | [2024-04-23 23:13:56,747] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,747] INFO Client environment:host.name=f4353fc65ad6 (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,747] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,747] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,747] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,747] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,748] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,751] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@61d47554 (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:56,754] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:23 kafka | [2024-04-23 23:13:56,765] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:23 kafka | [2024-04-23 23:13:56,786] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:23 kafka | [2024-04-23 23:13:56,862] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
23:16:23 kafka | [2024-04-23 23:13:56,863] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
23:16:23 kafka | [2024-04-23 23:13:56,872] INFO Socket connection established, initiating session, client: /172.17.0.8:47266, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
23:16:23 kafka | [2024-04-23 23:13:56,910] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003622b0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
23:16:23 kafka | [2024-04-23 23:13:57,035] INFO Session: 0x1000003622b0000 closed (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:57,035] INFO EventThread shut down for session: 0x1000003622b0000 (org.apache.zookeeper.ClientCnxn)
23:16:23 kafka | Using log4j config /etc/kafka/log4j.properties
23:16:23 kafka | ===> Launching ...
23:16:23 kafka | ===> Launching kafka ...
23:16:23 kafka | [2024-04-23 23:13:57,826] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
23:16:23 kafka | [2024-04-23 23:13:58,158] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:23 kafka | [2024-04-23 23:13:58,229] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
23:16:23 kafka | [2024-04-23 23:13:58,230] INFO starting (kafka.server.KafkaServer)
23:16:23 kafka | [2024-04-23 23:13:58,231] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
23:16:23 kafka | [2024-04-23 23:13:58,253] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
23:16:23 kafka | [2024-04-23 23:13:58,258] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:58,258] INFO Client environment:host.name=f4353fc65ad6 (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:58,258] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:58,258] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:58,258] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
23:16:23 kafka | [2024-04-23 23:13:58,258] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0
.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.785952456Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-23T23:13:54Z 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786407014Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786423074Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786430644Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786434114Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786437314Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786441414Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786444104Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786449324Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786452534Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786456025Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786460155Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:23 grafana | logger=settings 
t=2024-04-23T23:13:54.786463655Z level=info msg=Target target=[all] 23:16:23 policy-api | Waiting for mariadb port 3306... 23:16:23 policy-api | mariadb (172.17.0.4:3306) open 23:16:23 policy-api | Waiting for policy-db-migrator port 6824... 23:16:23 policy-api | policy-db-migrator (172.17.0.6:6824) open 23:16:23 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:23 policy-api | 23:16:23 policy-api | . ____ _ __ _ _ 23:16:23 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:23 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:23 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:23 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:23 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:23 policy-api | :: Spring Boot :: (v3.1.10) 23:16:23 policy-api | 23:16:23 policy-api | [2024-04-23T23:14:03.854+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 23:16:23 policy-api | [2024-04-23T23:14:03.924+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:23 policy-api | [2024-04-23T23:14:03.925+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:23 policy-api | [2024-04-23T23:14:06.056+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:23 policy-api | [2024-04-23T23:14:06.145+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 6 JPA repository interfaces. 23:16:23 policy-api | [2024-04-23T23:14:06.626+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:23 policy-api | [2024-04-23T23:14:06.628+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:23 policy-api | [2024-04-23T23:14:07.362+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:23 policy-api | [2024-04-23T23:14:07.373+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:23 policy-api | [2024-04-23T23:14:07.376+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:23 policy-api | [2024-04-23T23:14:07.376+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:16:23 policy-api | [2024-04-23T23:14:07.480+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:23 policy-api | [2024-04-23T23:14:07.480+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3471 ms 23:16:23 policy-api | [2024-04-23T23:14:07.969+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:23 policy-api | [2024-04-23T23:14:08.058+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 23:16:23 policy-api | [2024-04-23T23:14:08.121+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:23 policy-api | [2024-04-23T23:14:08.432+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:23 policy-api | [2024-04-23T23:14:08.473+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:23 policy-api | [2024-04-23T23:14:08.589+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@336206d8 23:16:23 policy-api | [2024-04-23T23:14:08.592+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:23 policy-api | [2024-04-23T23:14:10.775+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:23 policy-api | [2024-04-23T23:14:10.779+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:23 policy-api | [2024-04-23T23:14:11.819+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:23 policy-api | [2024-04-23T23:14:12.727+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:23 policy-api | [2024-04-23T23:14:13.818+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:23 policy-api | [2024-04-23T23:14:14.082+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@d9c54cd, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1b786da0, org.springframework.security.web.context.SecurityContextHolderFilter@280aa1bd, org.springframework.security.web.header.HeaderWriterFilter@2404ab3a, org.springframework.security.web.authentication.logout.LogoutFilter@5c5dd9ac, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6f54a7be, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@746f2b91, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@109d3527, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@7c453a1f, org.springframework.security.web.access.ExceptionTranslationFilter@446717fb, org.springframework.security.web.access.intercept.AuthorizationFilter@24287e5e] 23:16:23 policy-api | [2024-04-23T23:14:15.032+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:23 policy-api | [2024-04-23T23:14:15.132+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:23 mariadb | 2024-04-23 23:13:48+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:23 mariadb | 2024-04-23 23:13:48+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:23 mariadb | 2024-04-23 23:13:48+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:23 mariadb | 2024-04-23 23:13:48+00:00 [Note] [Entrypoint]: Initializing database files 23:16:23 mariadb | 2024-04-23 23:13:48 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:23 mariadb | 2024-04-23 23:13:48 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:23 mariadb | 2024-04-23 23:13:49 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:23 mariadb | 23:16:23 mariadb | 23:16:23 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:23 mariadb | To do so, start the server, then issue the following command: 23:16:23 mariadb | 23:16:23 mariadb | '/usr/bin/mysql_secure_installation' 23:16:23 mariadb | 23:16:23 mariadb | which will also give you the option of removing the test 23:16:23 mariadb | databases and anonymous user created by default. This is 23:16:23 mariadb | strongly recommended for production servers. 23:16:23 mariadb | 23:16:23 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:23 mariadb | 23:16:23 mariadb | Please report any problems at https://mariadb.org/jira 23:16:23 mariadb | 23:16:23 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
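The policy-api entries above show Tomcat starting ProtocolHandler ["http-nio-6969"] behind a Spring Security chain that includes BasicAuthenticationFilter, with the webapp context at /policy/api/v1. For replaying this CSIT setup by hand, a minimal Java 11+ sketch of an authenticated request might look like the following; the localhost host, the /healthcheck path, and the user/password values are assumptions for illustration (the log prints neither credentials nor endpoint paths), not values taken from this run.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class PolicyApiProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical credentials; the real ones come from the CSIT configuration, not this log.
        String creds = Base64.getEncoder().encodeToString("someUser:somePassword".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                // Port 6969 and context path /policy/api/v1 are from the Tomcat log lines above;
                // the /healthcheck suffix is an assumed endpoint for this sketch.
                .uri(URI.create("http://localhost:6969/policy/api/v1/healthcheck"))
                .header("Authorization", "Basic " + creds)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}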
23:16:23 mariadb | 23:16:23 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:23 mariadb | https://mariadb.org/get-involved/ 23:16:23 mariadb | 23:16:23 mariadb | 2024-04-23 23:13:50+00:00 [Note] [Entrypoint]: Database files initialized 23:16:23 mariadb | 2024-04-23 23:13:50+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:23 mariadb | 2024-04-23 23:13:50+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: Number of transaction pools: 1 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: 128 rollback segments are active. 23:16:23 policy-api | [2024-04-23T23:14:15.165+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:23 policy-api | [2024-04-23T23:14:15.186+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.097 seconds (process running for 12.786) 23:16:23 policy-api | [2024-04-23T23:14:31.313+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:23 policy-api | [2024-04-23T23:14:31.313+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:23 policy-api | [2024-04-23T23:14:31.315+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 23:16:23 policy-api | [2024-04-23T23:14:31.638+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:16:23 policy-api | [] 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786470105Z level=info msg="Path Home" path=/usr/share/grafana 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786474105Z level=info msg="Path Data" path=/var/lib/grafana 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786479015Z level=info msg="Path Logs" path=/var/log/grafana 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786482335Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786486475Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:23 grafana | logger=settings t=2024-04-23T23:13:54.786490535Z level=info msg="App mode production" 23:16:23 grafana | logger=sqlstore t=2024-04-23T23:13:54.7868101Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:23 grafana | logger=sqlstore t=2024-04-23T23:13:54.786835811Z level=info msg="Creating SQLite database file" 
path=/var/lib/grafana/grafana.db 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.787590263Z level=info msg="Starting DB migrations" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.788748803Z level=info msg="Executing migration" id="create migration_log table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.789705649Z level=info msg="Migration successfully executed" id="create migration_log table" duration=954.926µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.796697295Z level=info msg="Executing migration" id="create user table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.797391328Z level=info msg="Migration successfully executed" id="create user table" duration=693.993µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.801340434Z level=info msg="Executing migration" id="add unique index user.login" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.802564184Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.223099ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.806254826Z level=info msg="Executing migration" id="add unique index user.email" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.807525887Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.270371ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.814459364Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.815239766Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=780.672µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.818522911Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.819701581Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.17242ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.823155239Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.826872101Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.712862ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.832317542Z level=info msg="Executing migration" id="create user table v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.833284658Z level=info msg="Migration successfully executed" id="create user table v2" duration=968.866µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.837433528Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.838656478Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.22255ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.842241389Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.843470909Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.2292ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.848351251Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.848855249Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=506.848µs 23:16:23 grafana | 
logger=migrator t=2024-04-23T23:13:54.852310137Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.853189062Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=878.475µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.856990255Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.858838277Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.843802ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.862157492Z level=info msg="Executing migration" id="Update user table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.862210343Z level=info msg="Migration successfully executed" id="Update user table charset" duration=53.381µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.867328709Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.86859139Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.261661ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.872037327Z level=info msg="Executing migration" id="Add missing user data" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.872374893Z level=info msg="Migration successfully executed" id="Add missing user data" duration=336.876µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.875605088Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.876871668Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.25501ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.880071362Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.880931156Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=859.464µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.886257295Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.887912103Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.652998ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.891604925Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.90141066Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.806455ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.904733555Z level=info msg="Executing migration" id="Add uid column to user" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.906016046Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.280331ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.909236531Z level=info msg="Executing migration" id="Update uid column values for users" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.909527025Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=290.084µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.914392366Z level=info msg="Executing migration" id="Add unique 
index user_uid" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.915248611Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=862.255µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.919893338Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 23:16:23 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:23 policy-apex-pdp | Waiting for kafka port 9092... 23:16:23 policy-apex-pdp | mariadb (172.17.0.4:3306) open 23:16:23 policy-apex-pdp | kafka (172.17.0.8:9092) open 23:16:23 policy-apex-pdp | Waiting for pap port 6969... 23:16:23 policy-apex-pdp | pap (172.17.0.9:6969) open 23:16:23 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:23 policy-apex-pdp | [2024-04-23T23:14:27.674+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:23 policy-apex-pdp | [2024-04-23T23:14:27.914+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:23 policy-apex-pdp | allow.auto.create.topics = true 23:16:23 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:23 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:23 policy-apex-pdp | auto.offset.reset = latest 23:16:23 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:23 policy-apex-pdp | check.crcs = true 23:16:23 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:23 policy-apex-pdp | client.id = consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-1 23:16:23 policy-apex-pdp | client.rack = 23:16:23 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:23 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:23 policy-apex-pdp | enable.auto.commit = true 23:16:23 policy-apex-pdp | exclude.internal.topics = true 23:16:23 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:23 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:23 policy-apex-pdp | fetch.min.bytes = 1 23:16:23 policy-apex-pdp | group.id = dd2a8f8f-9499-4211-bd29-a21fd7f46681 23:16:23 policy-apex-pdp | group.instance.id = null 23:16:23 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:23 policy-apex-pdp | interceptor.classes = [] 23:16:23 policy-apex-pdp | internal.leave.group.on.close = true 23:16:23 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:23 policy-apex-pdp | isolation.level = read_uncommitted 23:16:23 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:23 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:23 policy-apex-pdp | max.poll.records = 500 23:16:23 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:23 
policy-apex-pdp | metric.reporters = [] 23:16:23 policy-apex-pdp | metrics.num.samples = 2 23:16:23 policy-apex-pdp | metrics.recording.level = INFO 23:16:23 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:23 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:23 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:23 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:23 policy-apex-pdp | request.timeout.ms = 30000 23:16:23 policy-apex-pdp | retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.jaas.config = null 23:16:23 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.login.class = null 23:16:23 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:23 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:23 policy-apex-pdp | security.providers = null 23:16:23 policy-apex-pdp | send.buffer.bytes = 131072 23:16:23 policy-apex-pdp | session.timeout.ms = 45000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-apex-pdp | ssl.cipher.suites = null 23:16:23 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client 
environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,259] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,261] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper) 23:16:23 kafka | [2024-04-23 23:13:58,266] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:23 kafka | [2024-04-23 23:13:58,272] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2024-04-23 23:13:58,275] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:23 kafka | [2024-04-23 23:13:58,278] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2024-04-23 23:13:58,289] INFO Socket connection established, initiating session, client: /172.17.0.8:47268, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2024-04-23 23:13:58,298] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003622b0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:23 kafka | [2024-04-23 23:13:58,305] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 23:16:23 kafka | [2024-04-23 23:13:58,614] INFO Cluster ID = xy0CN7giRUOzslts55W0Ww (kafka.server.KafkaServer) 23:16:23 kafka | [2024-04-23 23:13:58,617] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:23 kafka | [2024-04-23 23:13:58,673] INFO KafkaConfig values: 23:16:23 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:23 kafka | alter.config.policy.class.name = null 23:16:23 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:23 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:23 kafka | authorizer.class.name = 23:16:23 kafka | auto.create.topics.enable = true 23:16:23 kafka | auto.include.jmx.reporter = true 23:16:23 kafka | auto.leader.rebalance.enable = true 23:16:23 kafka | background.threads = 10 23:16:23 kafka | broker.heartbeat.interval.ms = 2000 23:16:23 kafka | broker.id = 1 23:16:23 kafka | broker.id.generation.enable = true 23:16:23 kafka | broker.rack = null 23:16:23 kafka | broker.session.timeout.ms = 9000 23:16:23 kafka | client.quota.callback.class = null 23:16:23 kafka | compression.type = producer 23:16:23 kafka | connection.failed.authentication.delay.ms = 100 23:16:23 kafka | connections.max.idle.ms = 600000 23:16:23 kafka | connections.max.reauth.ms = 0 23:16:23 kafka | control.plane.listener.name = null 23:16:23 kafka | controlled.shutdown.enable = true 23:16:23 kafka | controlled.shutdown.max.retries = 3 23:16:23 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:23 kafka | controller.listener.names = null 23:16:23 kafka | controller.quorum.append.linger.ms = 25 23:16:23 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:23 kafka | controller.quorum.election.timeout.ms = 1000 23:16:23 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:23 kafka | controller.quorum.request.timeout.ms = 2000 23:16:23 kafka | controller.quorum.retry.backoff.ms = 20 23:16:23 kafka | controller.quorum.voters = [] 23:16:23 kafka | controller.quota.window.num = 11 23:16:23 kafka | controller.quota.window.size.seconds = 1 23:16:23 kafka | controller.socket.timeout.ms = 30000 23:16:23 kafka | create.topic.policy.class.name = null 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.920724093Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=834.495µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.924664878Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.926115623Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.450355ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.931393691Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.932287496Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=893.795µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.935791655Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.936952054Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.159159ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.940467843Z 
level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.941703384Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.235681ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.947063814Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.947930388Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=867.205µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.951352976Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.951383487Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=31.1µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.954747312Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.955975843Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.228651ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.959235397Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.960498109Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.256272ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.965701046Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.966388187Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=687.301µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.96955229Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.970299652Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=747.422µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.97554827Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.978634002Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.085202ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.981451849Z level=info msg="Executing migration" id="create temp_user v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.982350785Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=898.356µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.985505987Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.98629628Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=790.233µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.989620556Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.99042178Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=798.324µs 23:16:23 grafana | logger=migrator 
t=2024-04-23T23:13:54.995121828Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.995985163Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=863.115µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.998952603Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:54.999796907Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=844.184µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.002722396Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.003193233Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=470.357µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.008120585Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.008724715Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=603.52µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.011679904Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.012144941Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=464.657µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.016215919Z level=info msg="Executing migration" id="create star table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.016914981Z level=info msg="Migration successfully executed" id="create star table" duration=698.452µs 23:16:23 kafka | default.replication.factor = 1 23:16:23 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:23 kafka | delegation.token.expiry.time.ms = 86400000 23:16:23 kafka | delegation.token.master.key = null 23:16:23 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:23 kafka | delegation.token.secret.key = null 23:16:23 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:23 kafka | delete.topic.enable = true 23:16:23 kafka | early.start.listeners = null 23:16:23 kafka | fetch.max.bytes = 57671680 23:16:23 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:23 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:23 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:23 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:23 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:23 kafka | group.consumer.max.size = 2147483647 23:16:23 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:23 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:23 kafka | group.consumer.session.timeout.ms = 45000 23:16:23 kafka | group.coordinator.new.enable = false 23:16:23 kafka | group.coordinator.threads = 1 23:16:23 kafka | group.initial.rebalance.delay.ms = 3000 23:16:23 kafka | group.max.session.timeout.ms = 1800000 23:16:23 kafka | group.max.size = 2147483647 23:16:23 kafka | group.min.session.timeout.ms = 6000 23:16:23 kafka | initial.broker.registration.timeout.ms = 60000 23:16:23 kafka | inter.broker.listener.name = PLAINTEXT 23:16:23 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:23 kafka | 
kafka.metrics.polling.interval.secs = 10 23:16:23 kafka | kafka.metrics.reporters = [] 23:16:23 kafka | leader.imbalance.check.interval.seconds = 300 23:16:23 kafka | leader.imbalance.per.broker.percentage = 10 23:16:23 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:23 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:23 kafka | log.cleaner.backoff.ms = 15000 23:16:23 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:23 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:23 kafka | log.cleaner.enable = true 23:16:23 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:23 kafka | log.cleaner.io.buffer.size = 524288 23:16:23 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:23 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:23 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:23 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:23 kafka | log.cleaner.threads = 1 23:16:23 kafka | log.cleanup.policy = [delete] 23:16:23 kafka | log.dir = /tmp/kafka-logs 23:16:23 kafka | log.dirs = /var/lib/kafka/data 23:16:23 kafka | log.flush.interval.messages = 9223372036854775807 23:16:23 kafka | log.flush.interval.ms = null 23:16:23 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:23 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:23 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:23 kafka | log.index.interval.bytes = 4096 23:16:23 kafka | log.index.size.max.bytes = 10485760 23:16:23 kafka | log.local.retention.bytes = -2 23:16:23 kafka | log.local.retention.ms = -2 23:16:23 kafka | log.message.downconversion.enable = true 23:16:23 kafka | log.message.format.version = 3.0-IV1 23:16:23 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:16:23 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:23 mariadb | 2024-04-23 23:13:50 0 [Note] mariadbd: ready for connections. 23:16:23 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:23 mariadb | 2024-04-23 23:13:51+00:00 [Note] [Entrypoint]: Temporary server started. 
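The ConsumerConfig dump printed earlier by policy-apex-pdp (bootstrap.servers = [kafka:9092], group.id = dd2a8f8f-9499-4211-bd29-a21fd7f46681, auto.offset.reset = latest, StringDeserializer keys, PLAINTEXT) maps one-to-one onto properties handed to the Kafka client library. A minimal sketch of a consumer built with those same values might look like this; the value deserializer and the policy-pdp-pap topic name are assumptions (the dump excerpt above ends before value.deserializer, and no topic is named in it).

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ApexLikeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the policy-apex-pdp ConsumerConfig dump above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "dd2a8f8f-9499-4211-bd29-a21fd7f46681");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Assumed: the dump excerpt is cut off before value.deserializer appears.
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // assumed topic name for illustration
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s partition=%d offset=%d%n", r.value(), r.partition(), r.offset());
            }
        }
    }
}

Unset keys fall back to the same defaults the dump shows (enable.auto.commit = true, session.timeout.ms = 45000, and so on), which is why the logged dump is far longer than the handful of properties actually set.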
23:16:23 mariadb | 2024-04-23 23:13:53+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:23 mariadb | 2024-04-23 23:13:53+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:23 mariadb | 23:16:23 mariadb | 23:16:23 mariadb | 2024-04-23 23:13:53+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:23 mariadb | 2024-04-23 23:13:53+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:23 mariadb | #!/bin/bash -xv 23:16:23 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:23 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:23 mariadb | # 23:16:23 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:23 mariadb | # you may not use this file except in compliance with the License. 23:16:23 mariadb | # You may obtain a copy of the License at 23:16:23 mariadb | # 23:16:23 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:23 mariadb | # 23:16:23 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:23 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:23 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:23 mariadb | # See the License for the specific language governing permissions and 23:16:23 mariadb | # limitations under the License. 23:16:23 mariadb | 23:16:23 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:23 mariadb | do 23:16:23 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:23 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:23 mariadb | done 23:16:23 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:23 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:23 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:23 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:23 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:23 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:23 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:23 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:23 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:23 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:23 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:23 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:23 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:23 kafka | log.message.timestamp.type = CreateTime 23:16:23 kafka | log.preallocate = false 23:16:23 kafka | log.retention.bytes = -1 23:16:23 kafka | log.retention.check.interval.ms = 300000 23:16:23 kafka | 
log.retention.hours = 168 23:16:23 kafka | log.retention.minutes = null 23:16:23 kafka | log.retention.ms = null 23:16:23 kafka | log.roll.hours = 168 23:16:23 kafka | log.roll.jitter.hours = 0 23:16:23 kafka | log.roll.jitter.ms = null 23:16:23 kafka | log.roll.ms = null 23:16:23 kafka | log.segment.bytes = 1073741824 23:16:23 kafka | log.segment.delete.delay.ms = 60000 23:16:23 kafka | max.connection.creation.rate = 2147483647 23:16:23 kafka | max.connections = 2147483647 23:16:23 kafka | max.connections.per.ip = 2147483647 23:16:23 kafka | max.connections.per.ip.overrides = 23:16:23 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:23 kafka | message.max.bytes = 1048588 23:16:23 kafka | metadata.log.dir = null 23:16:23 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:23 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:23 kafka | metadata.log.segment.bytes = 1073741824 23:16:23 kafka | metadata.log.segment.min.bytes = 8388608 23:16:23 kafka | metadata.log.segment.ms = 604800000 23:16:23 kafka | metadata.max.idle.interval.ms = 500 23:16:23 kafka | metadata.max.retention.bytes = 104857600 23:16:23 kafka | metadata.max.retention.ms = 604800000 23:16:23 kafka | metric.reporters = [] 23:16:23 kafka | metrics.num.samples = 2 23:16:23 kafka | metrics.recording.level = INFO 23:16:23 kafka | metrics.sample.window.ms = 30000 23:16:23 kafka | min.insync.replicas = 1 23:16:23 kafka | node.id = 1 23:16:23 kafka | num.io.threads = 8 23:16:23 kafka | num.network.threads = 3 23:16:23 kafka | num.partitions = 1 23:16:23 kafka | num.recovery.threads.per.data.dir = 1 23:16:23 kafka | num.replica.alter.log.dirs.threads = null 23:16:23 kafka | num.replica.fetchers = 1 23:16:23 kafka | offset.metadata.max.bytes = 4096 23:16:23 kafka | offsets.commit.required.acks = -1 23:16:23 kafka | offsets.commit.timeout.ms = 5000 23:16:23 kafka | offsets.load.buffer.size = 5242880 23:16:23 kafka | offsets.retention.check.interval.ms = 600000 23:16:23 kafka | offsets.retention.minutes = 10080 23:16:23 kafka | offsets.topic.compression.codec = 0 23:16:23 kafka | offsets.topic.num.partitions = 50 23:16:23 kafka | offsets.topic.replication.factor = 1 23:16:23 kafka | offsets.topic.segment.bytes = 104857600 23:16:23 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:23 kafka | password.encoder.iterations = 4096 23:16:23 kafka | password.encoder.key.length = 128 23:16:23 kafka | password.encoder.keyfactory.algorithm = null 23:16:23 kafka | password.encoder.old.secret = null 23:16:23 kafka | password.encoder.secret = null 23:16:23 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:23 kafka | process.roles = [] 23:16:23 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:23 kafka | producer.id.expiration.ms = 86400000 23:16:23 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:23 kafka | queued.max.request.bytes = -1 23:16:23 kafka | queued.max.requests = 500 23:16:23 kafka | quota.window.num = 11 23:16:23 kafka | quota.window.size.seconds = 1 23:16:23 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:23 kafka | remote.log.manager.task.interval.ms = 30000 23:16:23 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:23 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:23 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:23 kafka | remote.log.manager.thread.pool.size = 10 23:16:23 kafka | 
remote.log.metadata.custom.metadata.max.bytes = 128 23:16:23 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:23 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:23 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:23 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:23 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:23 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:23 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:23 mariadb | 23:16:23 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:23 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:23 kafka | remote.log.metadata.manager.class.path = null 23:16:23 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:23 kafka | remote.log.metadata.manager.listener.name = null 23:16:23 kafka | remote.log.reader.max.pending.tasks = 100 23:16:23 kafka | remote.log.reader.threads = 10 23:16:23 kafka | remote.log.storage.manager.class.name = null 23:16:23 kafka | remote.log.storage.manager.class.path = null 23:16:23 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 23:16:23 kafka | remote.log.storage.system.enable = false 23:16:23 kafka | replica.fetch.backoff.ms = 1000 23:16:23 kafka | replica.fetch.max.bytes = 1048576 23:16:23 kafka | replica.fetch.min.bytes = 1 23:16:23 kafka | replica.fetch.response.max.bytes = 10485760 23:16:23 kafka | replica.fetch.wait.max.ms = 500 23:16:23 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:23 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:23 mariadb | 23:16:23 mariadb | 2024-04-23 23:13:53+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:23 mariadb | 2024-04-23 23:13:53 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:23 mariadb | 2024-04-23 23:13:53 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:23 mariadb | 2024-04-23 23:13:53 0 [Note] InnoDB: Starting shutdown... 23:16:23 mariadb | 2024-04-23 23:13:53 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:23 mariadb | 2024-04-23 23:13:53 0 [Note] InnoDB: Buffer pool(s) dump completed at 240423 23:13:53 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Shutdown completed; log sequence number 339567; transaction id 298 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] mariadbd: Shutdown complete 23:16:23 mariadb | 23:16:23 mariadb | 2024-04-23 23:13:54+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:23 mariadb | 23:16:23 mariadb | 2024-04-23 23:13:54+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:23 mariadb | 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
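The db.sh script above loops over six schemas (migration, pooling, policyadmin, operationshistory, clampacm, policyclamp), creating each and granting policy_user full privileges; the xtrace lines confirm it ran, and the final import connects with -upolicy_user -ppolicy_user. A minimal JDBC sketch that verifies those grants from outside the container might look like this, assuming the mariadb-java-client driver (the same org.mariadb.jdbc stack the policy-api HikariPool above uses) is on the classpath and that the container's port 3306 is published to localhost.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbInitCheck {
    public static void main(String[] args) throws Exception {
        // Credentials match the policy_user that db.sh grants privileges to above;
        // localhost:3306 is an assumption about how the mariadb port is published.
        String url = "jdbc:mariadb://localhost:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                // Expect the six schemas created by db.sh to be listed.
                System.out.println(rs.getString(1));
            }
        }
    }
}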
23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Number of transaction pools: 1 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: 128 rollback segments are active. 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: log sequence number 339567; transaction id 299 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] Server socket created on IP: '::'. 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] mariadbd: ready for connections. 
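With "mariadbd: ready for connections" logged (the Version line naming port 3306 follows just below), the database is up. Harnesses commonly gate on this by probing the TCP port; note that a bare probe that closes without authenticating is exactly what produces the "Aborted connection ... user: 'unauthenticated'" warnings a few lines further on. A small wait-loop sketch, assuming the port is published to the host as localhost:3306:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class WaitForMariaDb {
        public static void main(String[] args) throws InterruptedException {
            // localhost:3306 is an assumption; the log only shows the server
            // listening on 0.0.0.0 / :: port 3306 inside the container.
            InetSocketAddress addr = new InetSocketAddress("localhost", 3306);
            for (int attempt = 1; attempt <= 60; attempt++) {
                try (Socket s = new Socket()) {
                    s.connect(addr, 1000);
                    System.out.println("mariadb is accepting connections");
                    return;
                } catch (IOException e) {
                    Thread.sleep(1000); // retry once per second
                }
            }
            throw new IllegalStateException("mariadb never came up");
        }
    }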
23:16:23 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:23 mariadb | 2024-04-23 23:13:54 0 [Note] InnoDB: Buffer pool(s) load completed at 240423 23:13:54 23:16:23 mariadb | 2024-04-23 23:13:55 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:16:23 mariadb | 2024-04-23 23:13:55 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 23:16:23 mariadb | 2024-04-23 23:13:56 67 [Warning] Aborted connection 67 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:23 mariadb | 2024-04-23 23:13:57 113 [Warning] Aborted connection 113 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:23 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:23 kafka | replica.lag.time.max.ms = 30000 23:16:23 kafka | replica.selector.class = null 23:16:23 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:23 kafka | replica.socket.timeout.ms = 30000 23:16:23 kafka | replication.quota.window.num = 11 23:16:23 kafka | replication.quota.window.size.seconds = 1 23:16:23 kafka | request.timeout.ms = 30000 23:16:23 kafka | reserved.broker.max.id = 1000 23:16:23 kafka | sasl.client.callback.handler.class = null 23:16:23 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:23 kafka | sasl.jaas.config = null 23:16:23 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:23 kafka | sasl.kerberos.service.name = null 23:16:23 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 kafka | sasl.login.callback.handler.class = null 23:16:23 kafka | sasl.login.class = null 23:16:23 kafka | sasl.login.connect.timeout.ms = null 23:16:23 kafka | sasl.login.read.timeout.ms = null 23:16:23 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:23 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:23 kafka | sasl.login.refresh.window.factor = 0.8 23:16:23 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:23 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:23 kafka | sasl.login.retry.backoff.ms = 100 23:16:23 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:23 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:23 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 kafka | sasl.oauthbearer.expected.audience = null 23:16:23 kafka | sasl.oauthbearer.expected.issuer = null 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:23 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:23 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:23 kafka | sasl.server.callback.handler.class = null 23:16:23 kafka | sasl.server.max.receive.size = 524288 23:16:23 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:23 kafka | security.providers = null 23:16:23 kafka | 
server.max.startup.time.ms = 9223372036854775807 23:16:23 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:23 kafka | socket.connection.setup.timeout.ms = 10000 23:16:23 kafka | socket.listen.backlog.size = 50 23:16:23 kafka | socket.receive.buffer.bytes = 102400 23:16:23 kafka | socket.request.max.bytes = 104857600 23:16:23 kafka | socket.send.buffer.bytes = 102400 23:16:23 kafka | ssl.cipher.suites = [] 23:16:23 kafka | ssl.client.auth = none 23:16:23 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 kafka | ssl.endpoint.identification.algorithm = https 23:16:23 kafka | ssl.engine.factory.class = null 23:16:23 kafka | ssl.key.password = null 23:16:23 kafka | ssl.keymanager.algorithm = SunX509 23:16:23 kafka | ssl.keystore.certificate.chain = null 23:16:23 kafka | ssl.keystore.key = null 23:16:23 kafka | ssl.keystore.location = null 23:16:23 kafka | ssl.keystore.password = null 23:16:23 kafka | ssl.keystore.type = JKS 23:16:23 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:23 kafka | ssl.protocol = TLSv1.3 23:16:23 kafka | ssl.provider = null 23:16:23 kafka | ssl.secure.random.implementation = null 23:16:23 kafka | ssl.trustmanager.algorithm = PKIX 23:16:23 kafka | ssl.truststore.certificates = null 23:16:23 kafka | ssl.truststore.location = null 23:16:23 kafka | ssl.truststore.password = null 23:16:23 kafka | ssl.truststore.type = JKS 23:16:23 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:23 kafka | transaction.max.timeout.ms = 900000 23:16:23 kafka | transaction.partition.verification.enable = true 23:16:23 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:23 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:23 kafka | transaction.state.log.min.isr = 2 23:16:23 kafka | transaction.state.log.num.partitions = 50 23:16:23 kafka | transaction.state.log.replication.factor = 3 23:16:23 kafka | transaction.state.log.segment.bytes = 104857600 23:16:23 kafka | transactional.id.expiration.ms = 604800000 23:16:23 kafka | unclean.leader.election.enable = false 23:16:23 kafka | unstable.api.versions.enable = false 23:16:23 kafka | zookeeper.clientCnxnSocket = null 23:16:23 kafka | zookeeper.connect = zookeeper:2181 23:16:23 kafka | zookeeper.connection.timeout.ms = null 23:16:23 kafka | zookeeper.max.in.flight.requests = 10 23:16:23 kafka | zookeeper.metadata.migration.enable = false 23:16:23 kafka | zookeeper.metadata.migration.min.batch.size = 200 23:16:23 kafka | zookeeper.session.timeout.ms = 18000 23:16:23 kafka | zookeeper.set.acl = false 23:16:23 kafka | zookeeper.ssl.cipher.suites = null 23:16:23 kafka | zookeeper.ssl.client.enable = false 23:16:23 kafka | zookeeper.ssl.crl.enable = false 23:16:23 kafka | zookeeper.ssl.enabled.protocols = null 23:16:23 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:23 kafka | zookeeper.ssl.keystore.location = null 23:16:23 kafka | zookeeper.ssl.keystore.password = null 23:16:23 kafka | zookeeper.ssl.keystore.type = null 23:16:23 kafka | zookeeper.ssl.ocsp.enable = false 23:16:23 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:23 kafka | zookeeper.ssl.truststore.location = null 23:16:23 kafka | zookeeper.ssl.truststore.password = null 23:16:23 kafka | zookeeper.ssl.truststore.type = null 23:16:23 kafka | (kafka.server.KafkaConfig) 23:16:23 kafka | [2024-04-23 23:13:58,707] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2024-04-23 23:13:58,707] INFO 
[ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2024-04-23 23:13:58,709] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2024-04-23 23:13:58,711] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:23 kafka | [2024-04-23 23:13:58,749] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:13:58,759] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:13:58,769] INFO Loaded 0 logs in 19ms (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:13:58,770] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:13:58,771] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:13:58,783] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:23 kafka | [2024-04-23 23:13:58,832] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:23 kafka | [2024-04-23 23:13:58,852] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:23 kafka | [2024-04-23 23:13:58,883] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:23 kafka | [2024-04-23 23:13:58,911] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2024-04-23 23:13:59,258] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:23 kafka | [2024-04-23 23:13:59,278] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:23 kafka | [2024-04-23 23:13:59,278] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:23 prometheus | ts=2024-04-23T23:13:53.711Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:23 prometheus | ts=2024-04-23T23:13:53.712Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" 23:16:23 prometheus | ts=2024-04-23T23:13:53.712Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" 23:16:23 prometheus | ts=2024-04-23T23:13:53.712Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:23 prometheus | ts=2024-04-23T23:13:53.712Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:23 prometheus | ts=2024-04-23T23:13:53.712Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:23 prometheus | ts=2024-04-23T23:13:53.716Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:23 prometheus | ts=2024-04-23T23:13:53.717Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
23:16:23 prometheus | ts=2024-04-23T23:13:53.719Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:23 prometheus | ts=2024-04-23T23:13:53.719Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:16:23 prometheus | ts=2024-04-23T23:13:53.724Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:23 prometheus | ts=2024-04-23T23:13:53.724Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.03µs 23:16:23 prometheus | ts=2024-04-23T23:13:53.724Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:23 prometheus | ts=2024-04-23T23:13:53.725Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:23 prometheus | ts=2024-04-23T23:13:53.726Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=272.804µs wal_replay_duration=1.503005ms wbl_replay_duration=230ns total_replay_duration=1.80741ms 23:16:23 prometheus | ts=2024-04-23T23:13:53.730Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 23:16:23 prometheus | ts=2024-04-23T23:13:53.730Z caller=main.go:1153 level=info msg="TSDB started" 23:16:23 prometheus | ts=2024-04-23T23:13:53.730Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:23 prometheus | ts=2024-04-23T23:13:53.732Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.82253ms db_storage=2.1µs remote_storage=2.82µs web_handler=1.42µs query_engine=1.82µs scrape=539.929µs scrape_sd=212.833µs notify=44.221µs notify_sd=15.88µs rules=8.98µs tracing=8.45µs 23:16:23 prometheus | ts=2024-04-23T23:13:53.732Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 23:16:23 prometheus | ts=2024-04-23T23:13:53.732Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
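Prometheus is now fully started: the TSDB is up, /etc/prometheus/prometheus.yml is loaded, and the server is "ready to receive web requests" on 0.0.0.0:9090. Readiness can be confirmed from outside via Prometheus's standard /-/ready endpoint, which returns HTTP 200 once startup completes; a minimal sketch using the Java 11+ HttpClient, assuming the port is reachable as localhost:9090:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PrometheusReadyCheck {
        public static void main(String[] args) throws Exception {
            // localhost:9090 is an assumption for where the harness reaches
            // the container; the log shows it listening on [::]:9090.
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://localhost:9090/-/ready")).GET().build();
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            // Expect 200 once "Server is ready to receive web requests"
            // has been logged, as it was above.
            System.out.println(resp.statusCode() + " " + resp.body());
        }
    }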
23:16:23 kafka | [2024-04-23 23:13:59,284] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:23 kafka | [2024-04-23 23:13:59,288] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2024-04-23 23:13:59,311] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,312] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,315] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,315] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,317] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,330] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:23 kafka | [2024-04-23 23:13:59,331] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:23 kafka | [2024-04-23 23:13:59,354] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2024-04-23 23:13:59,384] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1713914039372,1713914039372,1,0,0,72057608569815041,258,0,27 23:16:23 kafka | (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2024-04-23 23:13:59,386] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2024-04-23 23:13:59,439] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:23 kafka | [2024-04-23 23:13:59,444] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,450] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,451] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,464] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:13:59,468] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:23 kafka | [2024-04-23 23:13:59,472] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:13:59,481] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,485] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,489] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:23 kafka | [2024-04-23 23:13:59,491] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:23 kafka | [2024-04-23 23:13:59,498] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:23 kafka | [2024-04-23 23:13:59,498] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:23 kafka | [2024-04-23 23:13:59,518] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 23:16:23 kafka | [2024-04-23 23:13:59,518] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,525] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,528] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,531] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,544] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:23 kafka | [2024-04-23 23:13:59,548] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,553] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,558] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:23 kafka | [2024-04-23 23:13:59,570] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:23 kafka | [2024-04-23 23:13:59,571] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:23 kafka | [2024-04-23 23:13:59,573] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,573] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,573] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,573] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,577] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,578] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,578] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:23 kafka 
| [2024-04-23 23:13:59,579] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:23 kafka | [2024-04-23 23:13:59,580] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,584] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:13:59,588] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:23 kafka | [2024-04-23 23:13:59,592] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:23 kafka | [2024-04-23 23:13:59,592] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2024-04-23 23:13:59,593] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2024-04-23 23:13:59,597] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2024-04-23 23:13:59,597] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:23 kafka | [2024-04-23 23:13:59,598] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:23 kafka | [2024-04-23 23:13:59,598] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:23 kafka | [2024-04-23 23:13:59,602] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:23 kafka | [2024-04-23 23:13:59,602] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,602] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) 23:16:23 kafka | [2024-04-23 23:13:59,608] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:23 kafka | [2024-04-23 23:13:59,609] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,609] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,610] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,610] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,611] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,616] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:23 kafka | [2024-04-23 23:13:59,616] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) 23:16:23 kafka | [2024-04-23 23:13:59,616] INFO Kafka startTimeMs: 1713914039606 (org.apache.kafka.common.utils.AppInfoParser) 23:16:23 kafka | [2024-04-23 23:13:59,617] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:23 kafka | [2024-04-23 23:13:59,634] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:13:59,690] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:13:59,694] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2024-04-23 23:13:59,724] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:23 kafka | [2024-04-23 23:14:04,636] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:14:04,636] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:14:26,863] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:23 kafka | [2024-04-23 23:14:26,868] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> 
ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:23 kafka | [2024-04-23 23:14:26,893] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:14:26,901] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:14:26,923] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(yAXDFsmnQuORxbK4D4bccg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(ZbEBaBpuQVOKapUMBCq56A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:14:26,924] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:23 policy-apex-pdp | ssl.engine.factory.class = null 23:16:23 policy-apex-pdp | ssl.key.password = null 23:16:23 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:23 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:23 policy-apex-pdp | ssl.keystore.key = null 23:16:23 policy-apex-pdp | ssl.keystore.location = null 23:16:23 policy-apex-pdp | ssl.keystore.password = null 23:16:23 policy-apex-pdp | ssl.keystore.type = JKS 23:16:23 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:23 policy-apex-pdp | ssl.provider = null 23:16:23 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:23 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-apex-pdp | ssl.truststore.certificates = null 23:16:23 policy-apex-pdp | ssl.truststore.location = null 23:16:23 policy-apex-pdp | ssl.truststore.password = null 23:16:23 
policy-apex-pdp | ssl.truststore.type = JKS 23:16:23 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.077+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.077+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.077+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914068075 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.079+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-1, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Subscribed to topic(s): policy-pdp-pap 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.090+00:00|INFO|ServiceManager|main] service manager starting 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.090+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.097+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dd2a8f8f-9499-4211-bd29-a21fd7f46681, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.120+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:23 policy-apex-pdp | allow.auto.create.topics = true 23:16:23 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:23 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:23 policy-apex-pdp | auto.offset.reset = latest 23:16:23 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:23 policy-apex-pdp | check.crcs = true 23:16:23 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:23 policy-apex-pdp | client.id = consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2 23:16:23 policy-apex-pdp | client.rack = 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,926] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.02289415Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.024239052Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.344163ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.045018146Z level=info msg="Executing migration" id="create org table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.046339357Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.320251ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.049882667Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.051145197Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.26229ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.05436376Z level=info 
msg="Executing migration" id="create org_user table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.055157324Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=793.104µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.060439152Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.061733713Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.286891ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.065707668Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.066930769Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.223131ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.070335545Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.071125618Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=790.063µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.074307541Z level=info msg="Executing migration" id="Update org table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.074328951Z level=info msg="Migration successfully executed" id="Update org table charset" duration=22.27µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.078905237Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.078926657Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=22.35µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.082316653Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.082599568Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=357.646µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.085517286Z level=info msg="Executing migration" id="create dashboard table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.086389441Z level=info msg="Migration successfully executed" id="create dashboard table" duration=871.735µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.08933752Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.090262605Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=925.605µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.094929102Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.095935599Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.006327ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.098858477Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.099703042Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=838.725µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.10258871Z level=info msg="Executing migration" id="add unique index 
dashboard_tag.dasboard_id_term" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.103560615Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=971.485µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.107080843Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.107972628Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=891.405µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.112682106Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.117835321Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.158185ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.121100155Z level=info msg="Executing migration" id="create dashboard v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.121914248Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=815.143µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.126967272Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.127896208Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=928.466µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.131330965Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.132324891Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=993.556µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.135818829Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.136239507Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=420.078µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.139316578Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.140232402Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=915.274µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.145031172Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.145212755Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=177.973µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.148579001Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.150417702Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.83812ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.153803297Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.155701859Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.891222ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.159763876Z level=info 
msg="Executing migration" id="Add column gnetId in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.161585346Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.82089ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.164927541Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.165755775Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=827.884µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.16908636Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.17090244Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.8153ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.175226001Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.176108237Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=881.775µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.179533133Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.180470558Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=944.115µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.183875725Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.183903776Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=28.791µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.188176716Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:23 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:23 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:23 policy-apex-pdp | enable.auto.commit = true 23:16:23 policy-apex-pdp | exclude.internal.topics = true 23:16:23 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:23 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:23 policy-apex-pdp | fetch.min.bytes = 1 23:16:23 policy-apex-pdp | group.id = dd2a8f8f-9499-4211-bd29-a21fd7f46681 23:16:23 policy-apex-pdp | group.instance.id = null 23:16:23 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:23 policy-apex-pdp | interceptor.classes = [] 23:16:23 policy-apex-pdp | internal.leave.group.on.close = true 23:16:23 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:23 policy-apex-pdp | isolation.level = read_uncommitted 23:16:23 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:23 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:23 policy-apex-pdp | max.poll.records = 500 23:16:23 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:23 policy-apex-pdp | metric.reporters = [] 23:16:23 policy-apex-pdp | metrics.num.samples = 2 23:16:23 policy-apex-pdp | metrics.recording.level = INFO 23:16:23 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:23 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 
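The ConsumerConfig dump that began above (its remaining values continue right below) is the configuration policy-apex-pdp uses for its policy-pdp-pap subscription: bootstrap.servers [kafka:9092], group.id dd2a8f8f-9499-4211-bd29-a21fd7f46681, auto.offset.reset latest, String deserializers. For orientation only, a minimal standalone consumer with those same key settings is sketched here; this is not the ONAP code, which wraps the consumer in SingleThreadedKafkaTopicSource as logged earlier, and only the values just listed are taken from the log.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values copied from the ConsumerConfig dump in this log.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG,
                    "dd2a8f8f-9499-4211-bd29-a21fd7f46681");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // One bounded poll, loosely matching the fetchTimeout=15000
                // reported by the topic source above.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(15));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }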
23:16:23 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:23 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:23 policy-apex-pdp | request.timeout.ms = 30000 23:16:23 policy-apex-pdp | retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.jaas.config = null 23:16:23 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.login.class = null 23:16:23 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,927] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,928] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.188204057Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=27.831µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.191685474Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.194958569Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.272235ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.198589289Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.20110527Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.513982ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.205468612Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.207599608Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.130496ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.211001164Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:23 grafana | logger=migrator 
t=2024-04-23T23:13:55.213009207Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.004263ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.21621605Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.216448814Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=232.224µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.219680767Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.220502162Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=821.185µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.225435553Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.226222466Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=786.953µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.229604382Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.229631912Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.27µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.232211515Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.233010468Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=798.723µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.238769903Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.239509195Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=740.722µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.243190977Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.249505142Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.320254ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.252754115Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.253482467Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=728.032µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.257829579Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.258660463Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=830.774µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.262841182Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.263815588Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=974.206µs 
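The grafana entries above come from its startup schema migrator: each step logs an "Executing migration" line with a stable id, runs one DDL statement, then logs "Migration successfully executed" with the elapsed duration. A minimal sketch of that run-and-time pattern follows, written in Java purely for illustration (Grafana's real migrator is Go; the table, column names, and the in-memory H2 URL below are assumptions, not taken from Grafana's source):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.LinkedHashMap;
import java.util.Map;

// Sequential, duration-logging schema migrator in the style of the
// "Executing migration" / "Migration successfully executed" pairs above.
// Assumes the H2 driver is on the classpath; all SQL here is hypothetical.
public class MiniMigrator {
    public static void main(String[] args) throws Exception {
        Map<String, String> migrations = new LinkedHashMap<>();
        migrations.put("create dashboard table",
                "CREATE TABLE dashboard (id BIGINT PRIMARY KEY, title VARCHAR(255))");
        migrations.put("Add column gnetId in dashboard",
                "ALTER TABLE dashboard ADD COLUMN gnet_id BIGINT");
        migrations.put("Add index for gnetId in dashboard",
                "CREATE INDEX IDX_dashboard_gnet_id ON dashboard (gnet_id)");

        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:mig")) {
            for (Map.Entry<String, String> m : migrations.entrySet()) {
                System.out.printf("msg=\"Executing migration\" id=\"%s\"%n", m.getKey());
                long start = System.nanoTime();
                try (Statement st = conn.createStatement()) {
                    st.execute(m.getValue());
                }
                System.out.printf(
                        "msg=\"Migration successfully executed\" id=\"%s\" duration=%.3fms%n",
                        m.getKey(), (System.nanoTime() - start) / 1e6);
            }
        }
    }
}
```

Running the migrations in insertion order (a LinkedHashMap) matters: later steps like the gnetId index depend on earlier ones having created the column.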
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.267471018Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.267909196Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=437.808µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.272533322Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.273150692Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=618.07µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.278232787Z level=info msg="Executing migration" id="Add check_sum column" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.280452574Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.226457ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.283666337Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.28447809Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=817.553µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.290009782Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.290257057Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=247.035µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.293992058Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.294436315Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=447.777µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.298307549Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.299835085Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.522225ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.303720199Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 
kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 
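The kafka TRACE lines above record the controller's replica state machine during topic creation: every replica of __consumer_offsets and policy-pdp-pap moves from NonExistentReplica to NewReplica, while the earlier INFO lines move the partitions themselves from NonExistentPartition to NewPartition (and later to OnlinePartition). A toy model of that transition table is sketched below; the real logic is Kafka's internal ReplicaStateMachine in Scala, so the states here match the log but the set of allowed transitions is simplified:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Simplified replica state machine mirroring the controller TRACE lines.
// States are taken from the log; the transition table is an approximation.
public class ReplicaStates {
    enum State { NON_EXISTENT_REPLICA, NEW_REPLICA, ONLINE_REPLICA, OFFLINE_REPLICA }

    static final Map<State, Set<State>> VALID = new EnumMap<>(State.class);
    static {
        VALID.put(State.NON_EXISTENT_REPLICA, EnumSet.of(State.NEW_REPLICA));
        VALID.put(State.NEW_REPLICA, EnumSet.of(State.ONLINE_REPLICA, State.OFFLINE_REPLICA));
        VALID.put(State.ONLINE_REPLICA, EnumSet.of(State.OFFLINE_REPLICA));
        VALID.put(State.OFFLINE_REPLICA, EnumSet.of(State.ONLINE_REPLICA));
    }

    static State transition(String partition, State from, State to) {
        if (!VALID.getOrDefault(from, EnumSet.noneOf(State.class)).contains(to)) {
            throw new IllegalStateException(partition + ": illegal " + from + " -> " + to);
        }
        System.out.printf("Changed state of replica 1 for partition %s from %s to %s%n",
                partition, from, to);
        return to;
    }

    public static void main(String[] args) {
        State s = State.NON_EXISTENT_REPLICA;
        s = transition("__consumer_offsets-32", s, State.NEW_REPLICA);    // as logged above
        s = transition("__consumer_offsets-32", s, State.ONLINE_REPLICA); // later in startup
    }
}
```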
23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:26,934] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 policy-db-migrator | Waiting for mariadb port 3306... 23:16:23 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:23 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:23 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:23 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:23 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:23 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:23 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 23:16:23 policy-db-migrator | 321 blocks 23:16:23 policy-db-migrator | Preparing upgrade release version: 0800 23:16:23 policy-db-migrator | Preparing upgrade release version: 0900 23:16:23 policy-db-migrator | Preparing upgrade release version: 1000 23:16:23 policy-db-migrator | Preparing upgrade release version: 1100 23:16:23 policy-db-migrator | Preparing upgrade release version: 1200 23:16:23 policy-db-migrator | Preparing upgrade release version: 1300 23:16:23 policy-db-migrator | Done 23:16:23 policy-db-migrator | name version 23:16:23 policy-db-migrator | policyadmin 0 23:16:23 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:23 policy-db-migrator | upgrade: 0 -> 1300 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-pap | Waiting for mariadb port 3306... 23:16:23 policy-pap | mariadb (172.17.0.4:3306) open 23:16:23 policy-pap | Waiting for kafka port 9092... 23:16:23 policy-pap | kafka (172.17.0.8:9092) open 23:16:23 policy-pap | Waiting for api port 6969... 23:16:23 policy-pap | api (172.17.0.7:6969) open 23:16:23 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:23 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:23 policy-pap | 23:16:23 policy-pap | . 
[Spring Boot ASCII-art startup banner] 23:16:23 policy-pap | :: Spring Boot :: (v3.1.10) 23:16:23 policy-pap | 23:16:23 policy-pap | [2024-04-23T23:14:17.606+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 23:16:23 policy-pap | [2024-04-23T23:14:17.669+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 30 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:23 policy-pap | [2024-04-23T23:14:17.670+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:23 policy-pap | [2024-04-23T23:14:19.646+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:23 policy-pap | [2024-04-23T23:14:19.744+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 88 ms. Found 7 JPA repository interfaces. 23:16:23 policy-pap | [2024-04-23T23:14:20.196+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:23 policy-pap | [2024-04-23T23:14:20.197+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:23 policy-pap | [2024-04-23T23:14:20.802+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:23 policy-pap | [2024-04-23T23:14:20.812+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:23 policy-pap | [2024-04-23T23:14:20.814+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:23 policy-pap | [2024-04-23T23:14:20.814+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:16:23 policy-pap | [2024-04-23T23:14:20.914+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:23 policy-pap | [2024-04-23T23:14:20.914+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3169 ms 23:16:23 policy-pap | [2024-04-23T23:14:21.322+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:23 policy-pap | [2024-04-23T23:14:21.376+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 23:16:23 policy-pap | [2024-04-23T23:14:21.743+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:23 policy-pap | [2024-04-23T23:14:21.838+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@72f8ae0c 23:16:23 policy-pap | [2024-04-23T23:14:21.841+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
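Before any of that startup can happen, both policy-db-migrator and policy-pap block until mariadb's port 3306 accepts connections, retrying through the "nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused" loop shown above until the socket opens. A minimal Java equivalent of that wait-for-port loop, with host and port taken from the log (the 2 s connect timeout and 1 s retry interval are assumptions):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Wait-for-port loop equivalent to the repeated nc retries in the log:
// keep attempting a TCP connect until the service is listening.
public class WaitForPort {
    public static void main(String[] args) throws InterruptedException {
        String host = "mariadb"; // from the log
        int port = 3306;         // from the log
        while (true) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2_000); // timeout: assumption
                System.out.printf("Connection to %s %d port succeeded!%n", host, port);
                return;
            } catch (IOException e) {
                System.out.printf("connect to %s port %d failed: %s%n",
                        host, port, e.getMessage());
                Thread.sleep(1_000); // retry interval: assumption
            }
        }
    }
}
```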
23:16:23 policy-pap | [2024-04-23T23:14:21.869+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 23:16:23 policy-pap | [2024-04-23T23:14:23.327+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 23:16:23 policy-pap | [2024-04-23T23:14:23.339+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:23 policy-pap | [2024-04-23T23:14:23.800+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:23 policy-pap | [2024-04-23T23:14:24.203+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:23 policy-pap | [2024-04-23T23:14:24.328+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:23 policy-pap | [2024-04-23T23:14:24.570+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:23 policy-pap | allow.auto.create.topics = true 23:16:23 policy-pap | auto.commit.interval.ms = 5000 23:16:23 policy-pap | auto.include.jmx.reporter = true 23:16:23 policy-pap | auto.offset.reset = latest 23:16:23 policy-pap | bootstrap.servers = [kafka:9092] 23:16:23 policy-pap | check.crcs = true 23:16:23 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:23 policy-pap | client.id = consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-1 23:16:23 policy-pap | client.rack = 23:16:23 policy-pap | connections.max.idle.ms = 540000 23:16:23 policy-pap | default.api.timeout.ms = 60000 23:16:23 policy-pap | enable.auto.commit = true 23:16:23 policy-pap | exclude.internal.topics = true 23:16:23 policy-pap | fetch.max.bytes = 52428800 23:16:23 policy-pap | fetch.max.wait.ms = 500 23:16:23 policy-pap | fetch.min.bytes = 1 23:16:23 policy-pap | group.id = b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 
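The "HikariPool-1 - Starting... / Start completed." and MariaDB dialect lines above correspond to policy-pap building a pooled MariaDB DataSource before Hibernate bootstraps. A sketch of that HikariCP setup is below; only the mariadb host/port and the policyadmin schema name come from the log, so the full JDBC URL, credentials, and pool size are assumptions:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Minimal HikariCP pool of the kind behind the HikariPool-1 lines above.
public class PapDataSource {
    public static HikariDataSource create() {
        HikariConfig cfg = new HikariConfig();
        // host/port from the log; database name inferred from the
        // migrator's "policyadmin" schema, credentials assumed
        cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin");
        cfg.setUsername("policy_user"); // assumption
        cfg.setPassword("policy_user"); // assumption
        cfg.setMaximumPoolSize(10);     // HikariCP default, shown for clarity
        return new HikariDataSource(cfg);
    }
}
```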
policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS 
VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:23 policy-pap | group.instance.id = null 23:16:23 policy-pap | heartbeat.interval.ms = 3000 23:16:23 policy-pap | interceptor.classes = [] 23:16:23 policy-pap | internal.leave.group.on.close = true 23:16:23 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:23 policy-pap | isolation.level = read_uncommitted 23:16:23 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-pap | max.partition.fetch.bytes = 1048576 23:16:23 policy-pap | max.poll.interval.ms = 300000 23:16:23 policy-pap | max.poll.records = 500 23:16:23 policy-pap | metadata.max.age.ms = 300000 23:16:23 policy-pap | metric.reporters = [] 23:16:23 policy-pap | metrics.num.samples = 2 23:16:23 policy-pap | metrics.recording.level = INFO 23:16:23 policy-pap | metrics.sample.window.ms = 30000 23:16:23 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:23 policy-pap | receive.buffer.bytes = 65536 23:16:23 policy-pap | reconnect.backoff.max.ms = 1000 23:16:23 policy-pap | reconnect.backoff.ms = 50 23:16:23 policy-pap | request.timeout.ms = 30000 23:16:23 policy-pap | retry.backoff.ms = 100 23:16:23 policy-pap | sasl.client.callback.handler.class = null 23:16:23 policy-pap | sasl.jaas.config = null 23:16:23 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-pap | sasl.kerberos.service.name = null 23:16:23 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-pap | sasl.login.callback.handler.class = null 23:16:23 policy-pap | sasl.login.class = null 23:16:23 policy-pap | sasl.login.connect.timeout.ms = null 23:16:23 policy-pap | sasl.login.read.timeout.ms = null 23:16:23 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.mechanism = GSSAPI 23:16:23 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:23 policy-pap | 
sasl.oauthbearer.expected.issuer = null 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-pap | security.protocol = PLAINTEXT 23:16:23 policy-pap | security.providers = null 23:16:23 policy-pap | send.buffer.bytes = 131072 23:16:23 policy-pap | session.timeout.ms = 45000 23:16:23 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-pap | ssl.cipher.suites = null 23:16:23 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:23 policy-pap | ssl.engine.factory.class = null 23:16:23 policy-pap | ssl.key.password = null 23:16:23 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:23 policy-pap | ssl.keystore.certificate.chain = null 23:16:23 policy-pap | ssl.keystore.key = null 23:16:23 policy-pap | ssl.keystore.location = null 23:16:23 policy-pap | ssl.keystore.password = null 23:16:23 policy-pap | ssl.keystore.type = JKS 23:16:23 policy-pap | ssl.protocol = TLSv1.3 23:16:23 policy-pap | ssl.provider = null 23:16:23 policy-pap | ssl.secure.random.implementation = null 23:16:23 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-pap | ssl.truststore.certificates = null 23:16:23 policy-pap | ssl.truststore.location = null 23:16:23 policy-pap | ssl.truststore.password = null 23:16:23 policy-pap | ssl.truststore.type = JKS 23:16:23 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-pap | 23:16:23 policy-pap | [2024-04-23T23:14:24.721+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-pap | [2024-04-23T23:14:24.721+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-pap | [2024-04-23T23:14:24.721+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914064719 23:16:23 policy-pap | [2024-04-23T23:14:24.723+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-1, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Subscribed to topic(s): policy-pdp-pap 23:16:23 policy-pap | [2024-04-23T23:14:24.724+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:23 policy-pap | allow.auto.create.topics = true 23:16:23 policy-pap | auto.commit.interval.ms = 5000 23:16:23 policy-pap | auto.include.jmx.reporter = true 23:16:23 policy-pap | auto.offset.reset = latest 23:16:23 policy-pap | bootstrap.servers = [kafka:9092] 23:16:23 policy-pap | check.crcs = true 23:16:23 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:23 policy-pap | client.id = consumer-policy-pap-2 23:16:23 policy-pap | client.rack = 23:16:23 policy-pap | connections.max.idle.ms = 540000 23:16:23 policy-pap | default.api.timeout.ms = 60000 23:16:23 policy-pap | enable.auto.commit = true 23:16:23 policy-pap | exclude.internal.topics = true 23:16:23 policy-pap | fetch.max.bytes = 52428800 23:16:23 policy-pap | fetch.max.wait.ms = 500 23:16:23 policy-pap | fetch.min.bytes = 1 23:16:23 policy-pap | group.id = policy-pap 23:16:23 policy-pap | group.instance.id = null 23:16:23 
policy-pap | heartbeat.interval.ms = 3000 23:16:23 policy-pap | interceptor.classes = [] 23:16:23 policy-pap | internal.leave.group.on.close = true 23:16:23 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:23 policy-pap | isolation.level = read_uncommitted 23:16:23 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-pap | max.partition.fetch.bytes = 1048576 23:16:23 policy-pap | max.poll.interval.ms = 300000 23:16:23 policy-pap | max.poll.records = 500 23:16:23 policy-pap | metadata.max.age.ms = 300000 23:16:23 policy-pap | metric.reporters = [] 23:16:23 policy-pap | metrics.num.samples = 2 23:16:23 policy-pap | metrics.recording.level = INFO 23:16:23 policy-pap | metrics.sample.window.ms = 30000 23:16:23 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:23 policy-pap | receive.buffer.bytes = 65536 23:16:23 policy-pap | reconnect.backoff.max.ms = 1000 23:16:23 policy-pap | reconnect.backoff.ms = 50 23:16:23 policy-pap | request.timeout.ms = 30000 23:16:23 policy-pap | retry.backoff.ms = 100 23:16:23 policy-pap | sasl.client.callback.handler.class = null 23:16:23 policy-pap | sasl.jaas.config = null 23:16:23 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-pap | sasl.kerberos.service.name = null 23:16:23 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-pap | sasl.login.callback.handler.class = null 23:16:23 policy-pap | sasl.login.class = null 23:16:23 policy-pap | sasl.login.connect.timeout.ms = null 23:16:23 policy-pap | sasl.login.read.timeout.ms = null 23:16:23 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.mechanism = GSSAPI 23:16:23 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:23 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-pap | security.protocol = PLAINTEXT 23:16:23 policy-pap | security.providers = null 23:16:23 policy-pap | send.buffer.bytes = 131072 23:16:23 policy-pap | session.timeout.ms = 45000 23:16:23 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-pap | ssl.cipher.suites = null 23:16:23 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:23 policy-pap | ssl.engine.factory.class = null 23:16:23 policy-pap | 
ssl.key.password = null 23:16:23 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:23 policy-pap | ssl.keystore.certificate.chain = null 23:16:23 policy-pap | ssl.keystore.key = null 23:16:23 policy-pap | ssl.keystore.location = null 23:16:23 policy-pap | ssl.keystore.password = null 23:16:23 policy-pap | ssl.keystore.type = JKS 23:16:23 policy-pap | ssl.protocol = TLSv1.3 23:16:23 policy-pap | ssl.provider = null 23:16:23 policy-pap | ssl.secure.random.implementation = null 23:16:23 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-pap | ssl.truststore.certificates = null 23:16:23 policy-pap | ssl.truststore.location = null 23:16:23 policy-pap | ssl.truststore.password = null 23:16:23 policy-pap | ssl.truststore.type = JKS 23:16:23 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-pap | 23:16:23 policy-pap | [2024-04-23T23:14:24.730+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-pap | [2024-04-23T23:14:24.730+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-pap | [2024-04-23T23:14:24.730+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914064730 23:16:23 policy-pap | [2024-04-23T23:14:24.730+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:23 policy-pap | [2024-04-23T23:14:25.028+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:23 policy-db-migrator | -------------- 23:16:23 
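The two ConsumerConfig dumps above (and the earlier policy-apex-pdp one) are mostly Kafka client defaults; the values that were actually set are bootstrap.servers = [kafka:9092], the group ids, auto.offset.reset = latest, and String key/value deserializers, and each consumer then subscribes to policy-pdp-pap. A consumer wired with just those non-default values, as a sketch:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// KafkaConsumer matching the non-default settings in the dumps above;
// everything omitted here stays at the defaults the log prints.
public class PapConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // topic from the log
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
            }
        }
    }
}
```

With auto.offset.reset = latest, a consumer group with no committed offsets starts at the end of the topic, which is why PAP only sees PDP messages produced after it subscribes.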
policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.306005627Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.284499ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.310331069Z level=info msg="Executing migration" id="create data_source table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.311247954Z level=info msg="Migration successfully executed" id="create data_source table" duration=917.725µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.315975062Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.317000749Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.033117ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.320614168Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.321586135Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=972.667µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.326442965Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
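Each controller entry above records one partition of the internal __consumer_offsets topic (plus policy-pdp-pap-0) moving from NewPartition to OnlinePartition, with broker 1 as sole leader and ISR member. A minimal sketch of how that resulting assignment could be inspected from a client, assuming Kafka clients 3.x on the classpath; the bootstrap address and topic name come from the log, the class name is illustrative:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class DescribeOffsetsTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // Fetch partition metadata for the internal offsets topic.
                TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                        .allTopicNames().get().get("__consumer_offsets");
                // Each partition should report broker 1 as leader, matching the
                // state.change.logger entries above.
                desc.partitions().forEach(p ->
                        System.out.printf("partition %d leader %s isr %s%n",
                                p.partition(), p.leader(), p.isr()));
            }
        }
    }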
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:23 simulator | overriding logback.xml
23:16:23 simulator | 2024-04-23 23:13:56,241 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:23 simulator | 2024-04-23 23:13:56,304 INFO org.onap.policy.models.simulators starting
23:16:23 simulator | 2024-04-23 23:13:56,305 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
23:16:23 simulator | 2024-04-23 23:13:56,504 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
23:16:23 simulator | 2024-04-23 23:13:56,508 INFO org.onap.policy.models.simulators starting A&AI simulator
23:16:23 simulator | 2024-04-23 23:13:56,654 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:23 simulator | 2024-04-23 23:13:56,668 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:56,686 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:56,690 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:23 simulator | 2024-04-23 23:13:56,762 INFO Session workerName=node0
23:16:23 simulator | 2024-04-23 23:13:57,454 INFO Using GSON for REST calls
23:16:23 simulator | 2024-04-23 23:13:57,555 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
23:16:23 simulator | 2024-04-23 23:13:57,564 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
23:16:23 simulator | 2024-04-23 23:13:57,571 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1812ms
23:16:23 simulator | 2024-04-23 23:13:57,571 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4112 ms.
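The JettyJerseyServer entries describe an embedded Jetty server (11.0.20 per the log) with a Jersey ServletContainer mounted at /* and started on a dedicated thread. A minimal sketch of the same wiring, assuming jetty-servlet and jersey-container-servlet dependencies are available; the port and mapping are taken from the A&AI simulator entries above, while the provider package and class name are placeholders:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class EmbeddedSimulator {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6666);              // A&AI simulator port from the log
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            // Mount a Jersey ServletContainer at /*, as in the JettyJerseyServer entries.
            ServletHolder holder = context.addServlet(ServletContainer.class, "/*");
            // Hypothetical resource package; the real simulator registers AaiSimulatorJaxRs.
            holder.setInitParameter("jersey.config.server.provider.packages",
                    "org.onap.policy.simulators");
            server.setHandler(context);
            server.start();                                // produces the "Started Server@..." lines
            server.join();
        }
    }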
23:16:23 simulator | 2024-04-23 23:13:57,575 INFO org.onap.policy.models.simulators starting SDNC simulator
23:16:23 simulator | 2024-04-23 23:13:57,577 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:23 simulator | 2024-04-23 23:13:57,578 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:57,585 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:57,586 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:23 simulator | 2024-04-23 23:13:57,596 INFO Session workerName=node0
23:16:23 simulator | 2024-04-23 23:13:57,714 INFO Using GSON for REST calls
23:16:23 simulator | 2024-04-23 23:13:57,737 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
23:16:23 simulator | 2024-04-23 23:13:57,739 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:23 simulator | 2024-04-23 23:13:57,741 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1982ms
23:16:23 policy-pap | [2024-04-23T23:14:25.184+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:16:23 policy-pap | [2024-04-23T23:14:25.412+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@30437e9c, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2e057637, org.springframework.security.web.context.SecurityContextHolderFilter@1870b9b8, org.springframework.security.web.header.HeaderWriterFilter@5ae16aa, org.springframework.security.web.authentication.logout.LogoutFilter@5d98364c, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6b630d4b, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@9825465, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2befb16f, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@762f8ff6, org.springframework.security.web.access.ExceptionTranslationFilter@5ffdd510, org.springframework.security.web.access.intercept.AuthorizationFilter@6fc6f68f]
23:16:23 policy-pap | [2024-04-23T23:14:26.170+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:16:23 policy-pap | [2024-04-23T23:14:26.274+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:16:23 policy-pap | [2024-04-23T23:14:26.289+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
23:16:23 policy-pap | [2024-04-23T23:14:26.308+00:00|INFO|ServiceManager|main] Policy PAP starting
23:16:23 policy-pap | [2024-04-23T23:14:26.308+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
23:16:23 policy-pap | [2024-04-23T23:14:26.309+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
23:16:23 policy-pap | [2024-04-23T23:14:26.309+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
23:16:23 policy-pap | [2024-04-23T23:14:26.309+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
23:16:23 policy-pap | [2024-04-23T23:14:26.310+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
23:16:23 policy-pap | [2024-04-23T23:14:26.310+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
23:16:23 policy-pap | [2024-04-23T23:14:26.312+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@282aea3c
23:16:23 policy-pap | [2024-04-23T23:14:26.324+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:23 policy-pap | [2024-04-23T23:14:26.324+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:23 policy-pap | 	allow.auto.create.topics = true
23:16:23 policy-pap | 	auto.commit.interval.ms = 5000
23:16:23 policy-pap | 	auto.include.jmx.reporter = true
23:16:23 policy-pap | 	auto.offset.reset = latest
23:16:23 policy-pap | 	bootstrap.servers = [kafka:9092]
23:16:23 policy-pap | 	check.crcs = true
23:16:23 policy-pap | 	client.dns.lookup = use_all_dns_ips
23:16:23 policy-pap | 	client.id = consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3
23:16:23 policy-pap | 	client.rack =
23:16:23 policy-pap | 	connections.max.idle.ms = 540000
23:16:23 policy-pap | 	default.api.timeout.ms = 60000
23:16:23 policy-pap | 	enable.auto.commit = true
23:16:23 policy-pap | 	exclude.internal.topics = true
23:16:23 policy-pap | 	fetch.max.bytes = 52428800
23:16:23 policy-pap | 	fetch.max.wait.ms = 500
23:16:23 policy-pap | 	fetch.min.bytes = 1
23:16:23 policy-pap | 	group.id = b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6
23:16:23 policy-pap | 	group.instance.id = null
23:16:23 policy-pap | 	heartbeat.interval.ms = 3000
23:16:23 policy-pap | 	interceptor.classes = []
23:16:23 policy-pap | 	internal.leave.group.on.close = true
23:16:23 policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
23:16:23 policy-pap | 	isolation.level = read_uncommitted
23:16:23 policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:23 policy-pap | 	max.partition.fetch.bytes = 1048576
23:16:23 policy-pap | 	max.poll.interval.ms = 300000
23:16:23 policy-pap | 	max.poll.records = 500
23:16:23 policy-pap | 	metadata.max.age.ms = 300000
23:16:23 policy-pap | 	metric.reporters = []
23:16:23 policy-pap | 	metrics.num.samples = 2
23:16:23 policy-pap | 	metrics.recording.level = INFO
23:16:23 policy-pap | 	metrics.sample.window.ms = 30000
23:16:23 policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:23 policy-pap | 	receive.buffer.bytes = 65536
23:16:23 policy-pap | 	reconnect.backoff.max.ms = 1000
23:16:23 policy-pap | 	reconnect.backoff.ms = 50
23:16:23 policy-pap | 	request.timeout.ms = 30000
23:16:23 policy-pap | 	retry.backoff.ms = 100
23:16:23 policy-pap | 	sasl.client.callback.handler.class = null
23:16:23 policy-pap | 	sasl.jaas.config = null
23:16:23 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:23 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:23 policy-pap | 	sasl.kerberos.service.name = null
23:16:23 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:23 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:23 policy-pap | 	sasl.login.callback.handler.class = null
23:16:23 policy-pap | 	sasl.login.class = null
23:16:23 policy-pap | 	sasl.login.connect.timeout.ms = null
23:16:23 policy-pap | 	sasl.login.read.timeout.ms = null
23:16:23 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
23:16:23 policy-pap | 	sasl.login.refresh.min.period.seconds = 60
23:16:23 policy-pap | 	sasl.login.refresh.window.factor = 0.8
23:16:23 policy-pap | 	sasl.login.refresh.window.jitter = 0.05
23:16:23 policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
23:16:23 policy-pap | 	sasl.login.retry.backoff.ms = 100
23:16:23 policy-pap | 	sasl.mechanism = GSSAPI
23:16:23 simulator | 2024-04-23 23:13:57,741 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4840 ms.
23:16:23 simulator | 2024-04-23 23:13:57,764 INFO org.onap.policy.models.simulators starting SO simulator
23:16:23 simulator | 2024-04-23 23:13:57,766 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:23 simulator | 2024-04-23 23:13:57,767 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:57,772 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:57,773 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:23 simulator | 2024-04-23 23:13:57,780 INFO Session workerName=node0
23:16:23 simulator | 2024-04-23 23:13:57,865 INFO Using GSON for REST calls
23:16:23 simulator | 2024-04-23 23:13:57,884 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
23:16:23 simulator | 2024-04-23 23:13:57,885 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
23:16:23 simulator | 2024-04-23 23:13:57,885 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @2126ms
23:16:23 simulator | 2024-04-23 23:13:57,886 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4886 ms.
23:16:23 simulator | 2024-04-23 23:13:57,887 INFO org.onap.policy.models.simulators starting VFC simulator
23:16:23 simulator | 2024-04-23 23:13:57,894 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:23 simulator | 2024-04-23 23:13:57,895 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:57,896 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:23 simulator | 2024-04-23 23:13:57,897 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:23 simulator | 2024-04-23 23:13:57,900 INFO Session workerName=node0
23:16:23 simulator | 2024-04-23 23:13:57,942 INFO Using GSON for REST calls
23:16:23 simulator | 2024-04-23 23:13:57,951 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
23:16:23 simulator | 2024-04-23 23:13:57,953 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
23:16:23 simulator | 2024-04-23 23:13:57,953 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2194ms
23:16:23 simulator | 2024-04-23 23:13:57,953 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4943 ms.
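The WARN from JpaBaseConfiguration$JpaWebConfiguration above is Spring Boot's standard open-in-view notice: with OSIV enabled, the JPA session stays open for the entire request, so queries may still run during view rendering. Assuming the PAP container reads ordinary Spring Boot configuration, the warning is silenced (and OSIV disabled) with a single property:

    spring.jpa.open-in-view=false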
23:16:23 simulator | 2024-04-23 23:13:57,954 INFO org.onap.policy.models.simulators started
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.32730821Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=865.195µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.330456412Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.331148983Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=693.041µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.335824661Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.342238636Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.413356ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.348211985Z level=info msg="Executing migration" id="create data_source table v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.349201992Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=989.827µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.35267918Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.353529864Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=848.663µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.357637832Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.359061425Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.423473ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.362780917Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.363359027Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=577.851µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.367854381Z level=info msg="Executing migration" id="Add column with_credentials"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.370179209Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.324018ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.374486701Z level=info msg="Executing migration" id="Add secure json data column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.376861039Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.376548ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.412857895Z level=info msg="Executing migration" id="Update data_source table charset"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.412902776Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=47.751µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.416725Z level=info msg="Executing migration" id="Update initial version to 1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.417049965Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=324.955µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.420823998Z level=info msg="Executing migration" id="Add read_only data column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.423096285Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.272247ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.427171722Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.427352095Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=180.443µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.430271605Z level=info msg="Executing migration" id="Update json_data with nulls"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.430431937Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=160.542µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.433511468Z level=info msg="Executing migration" id="Add uid column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.436796972Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.284784ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.441414139Z level=info msg="Executing migration" id="Update uid value"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.441626252Z level=info msg="Migration successfully executed" id="Update uid value" duration=215.333µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.444041722Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.444805825Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=763.634µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.447786134Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.448640628Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=846.564µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.453644511Z level=info msg="Executing migration" id="create api_key table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.454241331Z level=info msg="Migration successfully executed" id="create api_key table" duration=596.07µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.458320949Z level=info msg="Executing migration" id="add index api_key.account_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.458895978Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=575.359µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.462832153Z level=info msg="Executing migration" id="add index api_key.key"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.463588955Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=752.342µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.469138147Z level=info msg="Executing migration" id="add index api_key.account_id_name"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.469936951Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=798.714µs
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.47347846Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.474558847Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.080318ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.481312129Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.48196795Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=655.911µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.484852888Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.485543049Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=690.141µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.490184626Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.49709414Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.909294ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.502752924Z level=info msg="Executing migration" id="create api_key table v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.503294044Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=540.92µs
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
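The grafana migrator entries follow a fixed execute-and-time pattern: an "Executing migration" line, the DDL, then "Migration successfully executed" with a microsecond duration. A sketch of that pattern over plain JDBC, assuming a configured DataSource; the class and method names are illustrative, not Grafana's actual migrator code:

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.sql.DataSource;

    public class TimedMigration {
        static void run(DataSource ds, String id, String ddl) throws Exception {
            System.out.printf("Executing migration id=%s%n", id);
            long start = System.nanoTime();
            try (Connection c = ds.getConnection(); Statement s = c.createStatement()) {
                s.execute(ddl);  // apply one migration step
            }
            long micros = (System.nanoTime() - start) / 1_000L;
            System.out.printf("Migration successfully executed id=%s duration=%dµs%n", id, micros);
        }
    }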
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0450-pdpgroup.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0470-pdp.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator |
23:16:23 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
23:16:23 policy-db-migrator | --------------
23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
23:16:23 policy-db-migrator | --------------
23:16:23 policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:23 policy-pap | 	sasl.oauthbearer.expected.audience = null
23:16:23 policy-pap | 	sasl.oauthbearer.expected.issuer = null
23:16:23 policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:23 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:23 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:23 policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:23 policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
23:16:23 policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
23:16:23 policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
23:16:23 policy-pap | 	security.protocol = PLAINTEXT
23:16:23 policy-pap | 	security.providers = null
23:16:23 policy-pap | 	send.buffer.bytes = 131072
23:16:23 policy-pap | 	session.timeout.ms = 45000
23:16:23 policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
23:16:23 policy-pap | 	socket.connection.setup.timeout.ms = 10000
23:16:23 policy-pap | 	ssl.cipher.suites = null
23:16:23 policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:23 policy-pap | 	ssl.endpoint.identification.algorithm = https
23:16:23 policy-pap | 	ssl.engine.factory.class = null
23:16:23 policy-pap | 	ssl.key.password = null
23:16:23 policy-pap | 	ssl.keymanager.algorithm = SunX509
23:16:23 policy-pap | 	ssl.keystore.certificate.chain = null
23:16:23 policy-pap | 	ssl.keystore.key = null
23:16:23 policy-pap | 	ssl.keystore.location = null
23:16:23 policy-pap | 	ssl.keystore.password = null
23:16:23 policy-pap | 	ssl.keystore.type = JKS
23:16:23 policy-pap | 	ssl.protocol = TLSv1.3
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.506370735Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.506902783Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=532.759µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.509678869Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.510431971Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=752.822µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.515606497Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.516273368Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=669.881µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.520509938Z level=info msg="Executing migration" id="copy api_key v1 to v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.520746812Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=237.404µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.523567899Z level=info msg="Executing migration" id="Drop old table api_key_v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.523953885Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=385.286µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.529220502Z level=info msg="Executing migration" id="Update api_key table charset"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.529243883Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.171µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.532254663Z level=info msg="Executing migration" id="Add expires to api_key table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.536434691Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.179368ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.540226595Z level=info msg="Executing migration" id="Add service account foreign key"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.54416129Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.935135ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.549721341Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.550031936Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=310.145µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.553459154Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.556013785Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.553721ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.560017572Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.562599235Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.579173ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.568533293Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.569344397Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=810.803µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.574053485Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.574869958Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=814.013µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.579138099Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.580049824Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=912.125µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.58343574Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.584253723Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=817.504µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.590754721Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.592195605Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.440513ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.598790314Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.600179127Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.388793ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.603819537Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.604131483Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=310.616µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.609840838Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.609870288Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=31.17µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.613163582Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.615941388Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.777096ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.620640815Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.623579845Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.93827ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.627177514Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.627244675Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=68.001µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.630830354Z level=info msg="Executing migration" id="create quota table v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.631719089Z level=info msg="Migration successfully executed" id="create quota table v1" duration=886.185µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.638682905Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.63960802Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=922.555µs
23:16:23 policy-pap | 	ssl.provider = null
23:16:23 policy-pap | 	ssl.secure.random.implementation = null
23:16:23 policy-pap | 	ssl.trustmanager.algorithm = PKIX
23:16:23 policy-pap | 	ssl.truststore.certificates = null
23:16:23 policy-pap | 	ssl.truststore.location = null
23:16:23 policy-pap | 	ssl.truststore.password = null
23:16:23 policy-pap | 	ssl.truststore.type = JKS
23:16:23 policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:23 policy-pap |
23:16:23 policy-pap | [2024-04-23T23:14:26.330+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:23 policy-pap | [2024-04-23T23:14:26.331+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:23 policy-pap | [2024-04-23T23:14:26.331+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914066330
23:16:23 policy-pap | [2024-04-23T23:14:26.331+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Subscribed to topic(s): policy-pdp-pap
23:16:23 policy-pap | [2024-04-23T23:14:26.331+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
23:16:23 policy-pap | [2024-04-23T23:14:26.331+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=14538c28-db4c-4d7b-9a85-d5f5bce15e3c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@430f0c63
23:16:23 policy-pap | [2024-04-23T23:14:26.331+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=14538c28-db4c-4d7b-9a85-d5f5bce15e3c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:23 policy-pap | [2024-04-23T23:14:26.332+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:23 policy-pap | 	allow.auto.create.topics = true
23:16:23 policy-pap | 	auto.commit.interval.ms = 5000
23:16:23 policy-pap | 	auto.include.jmx.reporter = true
23:16:23 policy-pap | 	auto.offset.reset = latest
23:16:23 policy-pap | 	bootstrap.servers = [kafka:9092]
23:16:23 policy-pap | 	check.crcs = true
23:16:23 policy-pap | 	client.dns.lookup = use_all_dns_ips
23:16:23 policy-pap | 	client.id = consumer-policy-pap-4
23:16:23 policy-pap | 	client.rack =
23:16:23 policy-pap | 	connections.max.idle.ms = 540000
23:16:23 policy-pap | 	default.api.timeout.ms = 60000
23:16:23 policy-pap | 	enable.auto.commit = true
23:16:23 policy-pap | 	exclude.internal.topics = true
23:16:23 policy-pap | 	fetch.max.bytes = 52428800
23:16:23 policy-pap | 	fetch.max.wait.ms = 500
23:16:23 policy-pap | 	fetch.min.bytes = 1
23:16:23 policy-pap | 	group.id = policy-pap
23:16:23 policy-pap | 	group.instance.id = null
23:16:23 policy-pap | 	heartbeat.interval.ms = 3000
23:16:23 policy-pap | 	interceptor.classes = []
23:16:23 policy-pap | 	internal.leave.group.on.close = true
23:16:23 policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
23:16:23 policy-pap | 	isolation.level = read_uncommitted
23:16:23 policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:23 policy-pap | 	max.partition.fetch.bytes = 1048576
23:16:23 policy-pap | 	max.poll.interval.ms = 300000
23:16:23 policy-pap | 	max.poll.records = 500
23:16:23 policy-pap | 	metadata.max.age.ms = 300000
23:16:23 policy-pap | 	metric.reporters = []
23:16:23 policy-pap | 	metrics.num.samples = 2
23:16:23 policy-pap | 	metrics.recording.level = INFO
23:16:23 policy-pap | 	metrics.sample.window.ms = 30000
23:16:23 policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:23 policy-pap | 	receive.buffer.bytes = 65536
23:16:23 policy-pap | 	reconnect.backoff.max.ms = 1000
23:16:23 policy-pap | 	reconnect.backoff.ms = 50
23:16:23 policy-pap | 	request.timeout.ms = 30000
23:16:23 policy-pap | 	retry.backoff.ms = 100
23:16:23 policy-pap | 	sasl.client.callback.handler.class = null
23:16:23 policy-pap | 	sasl.jaas.config = null
23:16:23 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:23 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:23 policy-pap | 	sasl.kerberos.service.name = null
23:16:23 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:23 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:23 policy-pap | 	sasl.login.callback.handler.class = null
23:16:23 policy-pap | 	sasl.login.class = null
23:16:23 policy-pap | 	sasl.login.connect.timeout.ms = null
23:16:23 policy-pap | 	sasl.login.read.timeout.ms = null
23:16:23 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
23:16:23 policy-pap |
sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-pap | sasl.mechanism = GSSAPI 23:16:23 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:23 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-pap | security.protocol = PLAINTEXT 23:16:23 policy-pap | security.providers = null 23:16:23 policy-pap | send.buffer.bytes = 131072 23:16:23 policy-pap | session.timeout.ms = 45000 23:16:23 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-pap | ssl.cipher.suites = null 23:16:23 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:23 policy-pap | ssl.engine.factory.class = null 23:16:23 policy-pap | ssl.key.password = null 23:16:23 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:23 policy-pap | ssl.keystore.certificate.chain = null 23:16:23 policy-pap | ssl.keystore.key = null 23:16:23 policy-pap | ssl.keystore.location = null 23:16:23 policy-pap | ssl.keystore.password = null 23:16:23 policy-pap | ssl.keystore.type = JKS 23:16:23 policy-pap | ssl.protocol = TLSv1.3 23:16:23 policy-pap | ssl.provider = null 23:16:23 policy-pap | ssl.secure.random.implementation = null 23:16:23 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-pap | ssl.truststore.certificates = null 23:16:23 policy-pap | ssl.truststore.location = null 23:16:23 policy-pap | ssl.truststore.password = null 23:16:23 policy-pap | ssl.truststore.type = JKS 23:16:23 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-pap | 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914066336 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, 
parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, 
version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 
policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=14538c28-db4c-4d7b-9a85-d5f5bce15e3c, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:23 policy-pap | [2024-04-23T23:14:26.337+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e57c6980-1818-42f6-9f2b-b325adf74916, alive=false, publisher=null]]: starting 
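The ConsumerConfig dumps above (group.id = policy-pap, bootstrap.servers = [kafka:9092], StringDeserializer for key and value, auto.offset.reset = latest, enable.auto.commit = true) describe an ordinary Kafka consumer before PAP wraps it in its SingleThreadedKafkaTopicSource. A minimal standalone sketch using only the values visible in the log; the ONAP wrapper classes (KafkaConsumerWrapper, BusTopicBase) are internal and not reproduced here:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values taken from the ConsumerConfig dump in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // PAP subscribes this consumer to the PDP<->PAP message topic, as logged.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> rec : records) {
                System.out.printf("offset=%d value=%s%n", rec.offset(), rec.value());
            }
        }
    }
}
```

The 15-second poll mirrors the fetchTimeout=15000 visible in the wrapper's toString() output above; the wrapper simply runs this poll loop on its own KAFKA-source thread.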
23:16:23 policy-pap | [2024-04-23T23:14:26.353+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:23 policy-pap | acks = -1 23:16:23 policy-pap | auto.include.jmx.reporter = true 23:16:23 policy-pap | batch.size = 16384 23:16:23 policy-pap | bootstrap.servers = [kafka:9092] 23:16:23 policy-pap | buffer.memory = 33554432 23:16:23 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:23 policy-pap | client.id = producer-1 23:16:23 policy-pap | compression.type = none 23:16:23 policy-pap | connections.max.idle.ms = 540000 23:16:23 policy-pap | delivery.timeout.ms = 120000 23:16:23 policy-pap | enable.idempotence = true 23:16:23 policy-pap | interceptor.classes = [] 23:16:23 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-pap | linger.ms = 0 23:16:23 policy-pap | max.block.ms = 60000 23:16:23 policy-pap | max.in.flight.requests.per.connection = 5 23:16:23 policy-pap | max.request.size = 1048576 23:16:23 policy-pap | metadata.max.age.ms = 300000 23:16:23 policy-pap | metadata.max.idle.ms = 300000 23:16:23 policy-pap | metric.reporters = [] 23:16:23 policy-pap | metrics.num.samples = 2 23:16:23 policy-pap | metrics.recording.level = INFO 23:16:23 policy-pap | metrics.sample.window.ms = 30000 23:16:23 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:23 policy-pap | partitioner.availability.timeout.ms = 0 23:16:23 policy-pap | partitioner.class = null 23:16:23 policy-pap | partitioner.ignore.keys = false 23:16:23 policy-pap | receive.buffer.bytes = 32768 23:16:23 policy-pap | reconnect.backoff.max.ms = 1000 23:16:23 policy-pap | reconnect.backoff.ms = 50 23:16:23 policy-pap | request.timeout.ms = 30000 23:16:23 policy-pap | retries = 2147483647 23:16:23 policy-pap | retry.backoff.ms = 100 23:16:23 policy-pap | sasl.client.callback.handler.class = null 23:16:23 policy-pap | sasl.jaas.config = null 23:16:23 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-pap | sasl.kerberos.service.name = null 23:16:23 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-pap | sasl.login.callback.handler.class = null 23:16:23 policy-pap | sasl.login.class = null 23:16:23 policy-pap | sasl.login.connect.timeout.ms = null 23:16:23 policy-pap | sasl.login.read.timeout.ms = null 23:16:23 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.643341102Z level=info msg="Executing migration" id="Update quota table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.643380353Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=40.241µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.647101614Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.648555508Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.452494ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.652465063Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.653397308Z 
level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=931.815µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.658211258Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.662518439Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.305291ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.666292782Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.666339093Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=46.311µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.670124315Z level=info msg="Executing migration" id="create session table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.6710698Z level=info msg="Migration successfully executed" id="create session table" duration=947.695µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.676068733Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.676156895Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=88.252µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.679910008Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.679997739Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=90.201µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.684212658Z level=info msg="Executing migration" id="create playlist table v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.685389118Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.17519ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.691092692Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.692264231Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.171509ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.696312778Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.69634107Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.711µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.699875848Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.699910208Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=31.72µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.703902714Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.707467413Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.564669ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.712033319Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.718051238Z level=info msg="Migration successfully executed" id="Add playlist column 
updated_at" duration=6.016899ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.72294766Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.723031891Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=86.081µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.757818897Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.758039511Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=227.954µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.765920971Z level=info msg="Executing migration" id="create preferences table v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.767603909Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.691639ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.772418319Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.772447139Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=29.83µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.775771954Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.778891916Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.124232ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.78217862Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.782365363Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=188.673µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.785626307Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.788806651Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.179514ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.793433707Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.796962105Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.527778ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.802702409Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.802882413Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=180.154µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.806526663Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.807727133Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.19934ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.810874005Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.811850762Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=980.047µs 23:16:23 grafana | 
logger=migrator t=2024-04-23T23:13:55.817720839Z level=info msg="Executing migration" id="create alert table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.818943379Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.22235ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.822368665Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.823330052Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=964.677µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.827132364Z level=info msg="Executing migration" id="add index alert state" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.828611599Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.478745ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.833809565Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:23 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:23 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:23 policy-apex-pdp | security.providers = null 23:16:23 policy-apex-pdp | send.buffer.bytes = 131072 23:16:23 policy-apex-pdp | session.timeout.ms = 45000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-apex-pdp | ssl.cipher.suites = null 23:16:23 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:23 policy-apex-pdp | ssl.engine.factory.class = null 23:16:23 policy-apex-pdp | ssl.key.password = null 23:16:23 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:23 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:23 policy-apex-pdp | ssl.keystore.key = null 23:16:23 policy-apex-pdp | ssl.keystore.location = null 23:16:23 policy-apex-pdp | ssl.keystore.password = null 23:16:23 policy-apex-pdp | ssl.keystore.type = JKS 23:16:23 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:23 policy-apex-pdp | ssl.provider = null 23:16:23 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:23 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-apex-pdp | ssl.truststore.certificates = null 23:16:23 policy-apex-pdp | ssl.truststore.location = null 23:16:23 policy-apex-pdp | ssl.truststore.password = null 23:16:23 policy-apex-pdp | ssl.truststore.type = JKS 23:16:23 policy-apex-pdp | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer 23:16:23 policy-apex-pdp | 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.128+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.128+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.128+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914068128 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.128+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Subscribed to topic(s): policy-pdp-pap 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.129+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2fbe19b2-254b-4c33-bda8-e44fc90c12a2, alive=false, publisher=null]]: starting 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.140+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:23 policy-apex-pdp | acks = -1 23:16:23 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:23 policy-apex-pdp | batch.size = 16384 23:16:23 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:23 policy-apex-pdp | buffer.memory = 33554432 23:16:23 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:23 policy-apex-pdp | client.id = producer-1 23:16:23 policy-apex-pdp | compression.type = none 23:16:23 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:23 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:23 policy-apex-pdp | enable.idempotence = true 23:16:23 policy-apex-pdp | interceptor.classes = [] 23:16:23 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-apex-pdp | linger.ms = 0 23:16:23 policy-apex-pdp | max.block.ms = 60000 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.834843332Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.033437ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.838622894Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.839377418Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=753.833µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.8425684Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.843523466Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=955.036µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.849145549Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.850494382Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.351423ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.854474187Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.866773141Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.301954ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.871078562Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 
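Both policy-pap and policy-apex-pdp log the same ProducerConfig shape: acks = -1, enable.idempotence = true, retries = 2147483647, max.in.flight.requests.per.connection = 5, String serializers. That combination is the standard Kafka idempotent-producer setup, where the broker de-duplicates retried batches so retries cannot introduce duplicates or reordering per partition. A minimal sketch, again using only values shown in the log:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        // acks=-1 ("all") plus idempotence: leader waits for all in-sync replicas
        // and de-duplicates retried batches by producer id + sequence number.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // 2147483647, as logged
        // Idempotence requires at most 5 in-flight requests to preserve ordering.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The sinks in this log publish JSON strings to policy-pdp-pap.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}
```

This setup is what produces the "Instantiated an idempotent producer" lines that appear further down for both components.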
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.871701702Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=622.53µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.876908059Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.878379752Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.469654ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.882232277Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.882797966Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=565.309µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.886113101Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.88668397Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=569.969µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.895337254Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.896872609Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.534895ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.901542417Z level=info msg="Executing migration" id="Add column is_default" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.906232284Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.686396ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.90958832Z level=info msg="Executing migration" id="Add column frequency" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.9132271Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.63804ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.918191452Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.921882983Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.691561ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.925840878Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.929728503Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.886765ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.935342986Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.936321222Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=978.276µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.946939538Z level=info msg="Executing migration" id="Update alert table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.946986319Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=49.651µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.950665899Z level=info msg="Executing migration" id="Update alert_notification 
table charset" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.95069388Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=27.431µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.953950963Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.954721427Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=769.984µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.960174387Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.961195044Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.024407ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.966213617Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.967124602Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=910.305µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.970807223Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.97178626Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=978.457µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.975333648Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.976456827Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.122588ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.982221743Z level=info msg="Executing migration" id="Add for to alert table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.986221678Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.001635ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.990250504Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.994206031Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.956447ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.997490995Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:55.997752129Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=260.474µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.002534349Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.003503824Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=949.385µs 23:16:23 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:23 policy-apex-pdp | max.request.size = 1048576 23:16:23 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:23 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:23 policy-apex-pdp | metric.reporters = [] 23:16:23 policy-apex-pdp 
| metrics.num.samples = 2 23:16:23 policy-apex-pdp | metrics.recording.level = INFO 23:16:23 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:23 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:23 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:23 policy-apex-pdp | partitioner.class = null 23:16:23 policy-apex-pdp | partitioner.ignore.keys = false 23:16:23 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:23 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:23 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:23 policy-apex-pdp | request.timeout.ms = 30000 23:16:23 policy-apex-pdp | retries = 2147483647 23:16:23 policy-apex-pdp | retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.jaas.config = null 23:16:23 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:23 policy-apex-pdp | sasl.login.class = null 23:16:23 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:23 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:23 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.mechanism = GSSAPI 23:16:23 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:23 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-pap | security.protocol = PLAINTEXT 23:16:23 policy-pap | security.providers = null 23:16:23 
policy-pap | send.buffer.bytes = 131072 23:16:23 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-pap | ssl.cipher.suites = null 23:16:23 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:23 policy-pap | ssl.engine.factory.class = null 23:16:23 policy-pap | ssl.key.password = null 23:16:23 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:23 policy-pap | ssl.keystore.certificate.chain = null 23:16:23 policy-pap | ssl.keystore.key = null 23:16:23 policy-pap | ssl.keystore.location = null 23:16:23 policy-pap | ssl.keystore.password = null 23:16:23 policy-pap | ssl.keystore.type = JKS 23:16:23 policy-pap | ssl.protocol = TLSv1.3 23:16:23 policy-pap | ssl.provider = null 23:16:23 policy-pap | ssl.secure.random.implementation = null 23:16:23 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-pap | ssl.truststore.certificates = null 23:16:23 policy-pap | ssl.truststore.location = null 23:16:23 policy-pap | ssl.truststore.password = null 23:16:23 policy-pap | ssl.truststore.type = JKS 23:16:23 policy-pap | transaction.timeout.ms = 60000 23:16:23 policy-pap | transactional.id = null 23:16:23 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-pap | 23:16:23 policy-pap | [2024-04-23T23:14:26.363+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:23 policy-pap | [2024-04-23T23:14:26.378+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-pap | [2024-04-23T23:14:26.378+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:23 policy-apex-pdp | security.providers = null 23:16:23 policy-apex-pdp | send.buffer.bytes = 131072 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-apex-pdp | ssl.cipher.suites = null 23:16:23 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:23 policy-apex-pdp | ssl.engine.factory.class = null 23:16:23 policy-apex-pdp | ssl.key.password = null 23:16:23 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:23 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:23 policy-apex-pdp | ssl.keystore.key = null 23:16:23 policy-apex-pdp | ssl.keystore.location = null 23:16:23 policy-apex-pdp | ssl.keystore.password = null 23:16:23 policy-apex-pdp | ssl.keystore.type = JKS 23:16:23 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:23 policy-apex-pdp | ssl.provider = null 23:16:23 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:23 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-apex-pdp | ssl.truststore.certificates = null 23:16:23 policy-apex-pdp | ssl.truststore.location = null 23:16:23 policy-apex-pdp | ssl.truststore.password = null 23:16:23 policy-apex-pdp | ssl.truststore.type = JKS 23:16:23 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:23 policy-apex-pdp | transactional.id = null 23:16:23 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-apex-pdp | 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.148+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent 
producer. 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.164+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.164+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.164+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914068164 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.164+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2fbe19b2-254b-4c33-bda8-e44fc90c12a2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.164+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.165+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.167+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.167+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.169+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.169+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.169+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.169+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dd2a8f8f-9499-4211-bd29-a21fd7f46681, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.169+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dd2a8f8f-9499-4211-bd29-a21fd7f46681, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.169+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.187+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:23 policy-apex-pdp | [] 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.189+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6f256563-c602-4a4f-a522-665258578ca1","timestampMs":1713914068171,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.333+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.333+00:00|INFO|ServiceManager|main] service manager starting 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.333+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.333+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.343+00:00|INFO|ServiceManager|main] service manager started 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.343+00:00|INFO|ServiceManager|main] service manager started 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.343+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
23:16:23 policy-pap | [2024-04-23T23:14:26.378+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914066378 23:16:23 policy-pap | [2024-04-23T23:14:26.379+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e57c6980-1818-42f6-9f2b-b325adf74916, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:23 policy-pap | [2024-04-23T23:14:26.379+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d97d00d0-fd08-4063-b633-d61c34ffb5e8, alive=false, publisher=null]]: starting 23:16:23 policy-pap | [2024-04-23T23:14:26.379+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:23 policy-pap | acks = -1 23:16:23 policy-pap | auto.include.jmx.reporter = true 23:16:23 policy-pap | batch.size = 16384 23:16:23 policy-pap | bootstrap.servers = [kafka:9092] 23:16:23 policy-pap | buffer.memory = 33554432 23:16:23 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:23 policy-pap | client.id = producer-2 23:16:23 policy-pap | compression.type = none 23:16:23 policy-pap | connections.max.idle.ms = 540000 23:16:23 policy-pap | delivery.timeout.ms = 120000 23:16:23 policy-pap | enable.idempotence = true 23:16:23 policy-pap | interceptor.classes = [] 23:16:23 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-pap | linger.ms = 0 23:16:23 policy-pap | max.block.ms = 60000 23:16:23 policy-pap | max.in.flight.requests.per.connection = 5 23:16:23 policy-pap | max.request.size = 1048576 23:16:23 policy-pap | metadata.max.age.ms = 300000 23:16:23 policy-pap | metadata.max.idle.ms = 300000 23:16:23 policy-pap | metric.reporters = [] 23:16:23 policy-pap | metrics.num.samples = 2 23:16:23 policy-pap | metrics.recording.level = INFO 23:16:23 policy-pap | metrics.sample.window.ms = 30000 23:16:23 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:23 policy-pap | partitioner.availability.timeout.ms = 0 23:16:23 policy-pap | partitioner.class = null 23:16:23 policy-pap | partitioner.ignore.keys = false 23:16:23 policy-pap | receive.buffer.bytes = 32768 23:16:23 policy-pap | reconnect.backoff.max.ms = 1000 23:16:23 policy-pap | reconnect.backoff.ms = 50 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.348+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.514+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: xy0CN7giRUOzslts55W0Ww 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.516+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.514+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Cluster ID: xy0CN7giRUOzslts55W0Ww 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] (Re-)joining group 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.555+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Request joining group due to: need to re-join with the given member-id: consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2-f6fd4c3e-b29d-4254-adbe-037697e1c482 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.556+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.556+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] (Re-)joining group 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.992+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:23 policy-apex-pdp | [2024-04-23T23:14:28.994+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:23 policy-apex-pdp | [2024-04-23T23:14:31.562+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Successfully joined group with generation Generation{generationId=1, memberId='consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2-f6fd4c3e-b29d-4254-adbe-037697e1c482', protocol='range'} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:31.571+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Finished assignment for group at generation 1: {consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2-f6fd4c3e-b29d-4254-adbe-037697e1c482=Assignment(partitions=[policy-pdp-pap-0])} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:31.580+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Successfully synced group in generation Generation{generationId=1, memberId='consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2-f6fd4c3e-b29d-4254-adbe-037697e1c482', protocol='range'} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:31.580+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:23 policy-apex-pdp | [2024-04-23T23:14:31.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Adding newly assigned partitions: policy-pdp-pap-0 23:16:23 policy-apex-pdp | [2024-04-23T23:14:31.591+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Found no committed offset for partition policy-pdp-pap-0 23:16:23 policy-apex-pdp | [2024-04-23T23:14:31.601+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2, groupId=dd2a8f8f-9499-4211-bd29-a21fd7f46681] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
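The join sequence above is the normal first contact of a new consumer group, not an error: the broker rejects the initial join with MemberIdRequiredException so the client can rejoin with the member id it was just assigned, the group then syncs, policy-pdp-pap-0 is assigned, and the offset is reset because the group has no committed position yet. A sketch of a plain consumer that would drive this same sequence (group id and topic taken from the log; the deserializers are assumptions, since the log only shows the KafkaConsumerWrapper):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PdpPapSource {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "dd2a8f8f-9499-4211-bd29-a21fd7f46681");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // The first poll triggers the whole join/sync/assign sequence logged above.
                consumer.poll(Duration.ofSeconds(15))
                        .forEach(r -> System.out.printf("[IN|KAFKA|%s] %s%n", r.topic(), r.value()));
            }
        }
    }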
23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.170+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9279c3a6-4c8e-4fc2-958d-aba5a6ed1655","timestampMs":1713914088170,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.197+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9279c3a6-4c8e-4fc2-958d-aba5a6ed1655","timestampMs":1713914088170,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.200+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.333+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c729b544-070b-4fbc-8f45-5f1397ac3912","timestampMs":1713914088272,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.345+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.345+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0dfbde6d-fdf0-4c97-8574-1659c8f765f0","timestampMs":1713914088345,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.347+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c729b544-070b-4fbc-8f45-5f1397ac3912","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d386bc37-2c97-4c73-8201-a95ea6d6f4f2","timestampMs":1713914088346,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.376+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0dfbde6d-fdf0-4c97-8574-1659c8f765f0","timestampMs":1713914088345,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.376+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.383+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"c729b544-070b-4fbc-8f45-5f1397ac3912","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d386bc37-2c97-4c73-8201-a95ea6d6f4f2","timestampMs":1713914088346,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.383+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.414+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e9adc99-8665-4eb8-adc5-9505d322764a","timestampMs":1713914088273,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.418+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e9adc99-8665-4eb8-adc5-9505d322764a","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"7b241b01-dd60-4faa-97f7-16154ece87ee","timestampMs":1713914088418,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.428+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e9adc99-8665-4eb8-adc5-9505d322764a","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"7b241b01-dd60-4faa-97f7-16154ece87ee","timestampMs":1713914088418,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.428+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.453+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6df3431b-100f-44ac-aba2-9368f2ebc97d","timestampMs":1713914088430,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.455+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6df3431b-100f-44ac-aba2-9368f2ebc97d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"316de5f6-9d37-41f7-ba58-e7b26873131f","timestampMs":1713914088455,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.464+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6df3431b-100f-44ac-aba2-9368f2ebc97d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"316de5f6-9d37-41f7-ba58-e7b26873131f","timestampMs":1713914088455,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-apex-pdp | [2024-04-23T23:14:48.465+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:23 policy-apex-pdp | [2024-04-23T23:14:56.163+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.5 - policyadmin [23/Apr/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2" 23:16:23 policy-apex-pdp | [2024-04-23T23:15:56.079+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.5 - policyadmin [23/Apr/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10648 "-" "Prometheus/2.51.2" 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:23 kafka | [2024-04-23 
23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-pap | request.timeout.ms = 30000 23:16:23 policy-pap | retries = 2147483647 23:16:23 policy-pap | retry.backoff.ms = 100 23:16:23 policy-pap | sasl.client.callback.handler.class = null 23:16:23 policy-pap | sasl.jaas.config = null 23:16:23 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:23 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:23 policy-pap | sasl.kerberos.service.name = null 23:16:23 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:23 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:23 policy-pap | sasl.login.callback.handler.class = null 23:16:23 policy-pap | sasl.login.class = null 23:16:23 policy-pap | sasl.login.connect.timeout.ms = null 23:16:23 policy-pap | sasl.login.read.timeout.ms = null 23:16:23 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:23 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:23 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:23 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:23 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.mechanism = GSSAPI 23:16:23 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:23 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:23 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:23 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:23 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:23 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:23 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:23 policy-pap | security.protocol = PLAINTEXT 23:16:23 policy-pap | security.providers = null 23:16:23 policy-pap | send.buffer.bytes = 131072 23:16:23 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:23 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:23 policy-pap | ssl.cipher.suites = null 23:16:23 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:23 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:23 policy-pap | ssl.engine.factory.class = null 23:16:23 policy-pap | ssl.key.password = null 23:16:23 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:23 policy-pap | ssl.keystore.certificate.chain = null 23:16:23 policy-pap | ssl.keystore.key = null 23:16:23 policy-pap | ssl.keystore.location = null 23:16:23 policy-pap | ssl.keystore.password = null 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, 
conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:23 policy-pap | ssl.keystore.type = JKS 23:16:23 policy-pap | ssl.protocol = TLSv1.3 23:16:23 policy-pap | ssl.provider = null 23:16:23 policy-pap | ssl.secure.random.implementation = null 23:16:23 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:23 policy-pap | ssl.truststore.certificates = null 23:16:23 policy-pap | ssl.truststore.location = null 23:16:23 policy-pap | ssl.truststore.password = null 23:16:23 policy-pap | ssl.truststore.type = JKS 23:16:23 policy-pap | transaction.timeout.ms = 60000 23:16:23 policy-pap | transactional.id = null 23:16:23 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:23 policy-pap | 23:16:23 policy-pap | [2024-04-23T23:14:26.380+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
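The ProducerConfig dump above (acks = -1, enable.idempotence = true, client.id producer-2, StringSerializer for key and value) describes an idempotent producer with otherwise default settings, matching the "Instantiated an idempotent producer" line that closes it. A sketch of the equivalent programmatic setup; the payload string is illustrative only:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PapPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-2");
            props.put(ProducerConfig.ACKS_CONFIG, "all");            // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap",
                        "{\"messageName\":\"PDP_STATUS\"}"));  // illustrative payload
            }
        }
    }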
23:16:23 policy-pap | [2024-04-23T23:14:26.382+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:23 policy-pap | [2024-04-23T23:14:26.382+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:23 policy-pap | [2024-04-23T23:14:26.382+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713914066382 23:16:23 policy-pap | [2024-04-23T23:14:26.382+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d97d00d0-fd08-4063-b633-d61c34ffb5e8, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:23 policy-pap | [2024-04-23T23:14:26.382+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:23 policy-pap | [2024-04-23T23:14:26.383+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:23 policy-pap | [2024-04-23T23:14:26.385+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:23 policy-pap | [2024-04-23T23:14:26.386+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:23 policy-pap | [2024-04-23T23:14:26.387+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:23 policy-pap | [2024-04-23T23:14:26.387+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:23 policy-pap | [2024-04-23T23:14:26.388+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:23 policy-pap | [2024-04-23T23:14:26.388+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:23 policy-pap | [2024-04-23T23:14:26.389+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:23 policy-pap | [2024-04-23T23:14:26.394+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:23 policy-pap | [2024-04-23T23:14:26.394+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:23 policy-pap | [2024-04-23T23:14:26.395+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.522 seconds (process running for 10.137) 23:16:23 policy-pap | [2024-04-23T23:14:26.872+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: xy0CN7giRUOzslts55W0Ww 23:16:23 policy-pap | [2024-04-23T23:14:26.874+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: xy0CN7giRUOzslts55W0Ww 23:16:23 policy-pap | [2024-04-23T23:14:26.875+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:23 policy-pap | [2024-04-23T23:14:26.875+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: xy0CN7giRUOzslts55W0Ww 23:16:23 policy-pap | [2024-04-23T23:14:26.908+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:23 policy-pap | [2024-04-23T23:14:26.908+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Cluster ID: xy0CN7giRUOzslts55W0Ww 23:16:23 policy-pap | [2024-04-23T23:14:26.993+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] 
[Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:16:23 policy-pap | [2024-04-23T23:14:26.998+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:23 policy-pap | [2024-04-23T23:14:26.998+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:16:23 policy-pap | [2024-04-23T23:14:27.043+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:23 policy-pap | [2024-04-23T23:14:27.115+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:23 policy-pap | [2024-04-23T23:14:27.163+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:23 policy-pap | [2024-04-23T23:14:27.229+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 zookeeper | ===> User 23:16:23 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:23 zookeeper | ===> Configuring ... 23:16:23 zookeeper | ===> Running preflight checks ... 23:16:23 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:23 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:23 zookeeper | ===> Launching ... 23:16:23 zookeeper | ===> Launching zookeeper ... 
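Each numbered migrator script above runs one guarded DDL statement, and the CREATE TABLE IF NOT EXISTS form keeps re-runs of the migration harmless. A sketch of that per-script pattern over JDBC, with the DDL copied from script 0670-toscapolicies.sql; the connection URL and credentials are placeholders, not the values this job uses:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ApplyMigration {
        public static void main(String[] args) throws Exception {
            // DDL taken verbatim from 0670-toscapolicies.sql in the log above.
            String ddl = "CREATE TABLE IF NOT EXISTS toscapolicies ("
                       + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                       + "PRIMARY KEY PK_TOSCAPOLICIES (name, version))";
            try (Connection c = DriverManager.getConnection(
                         "jdbc:mariadb://mariadb:3306/policyadmin", "user", "password");
                 Statement s = c.createStatement()) {
                s.execute(ddl);  // idempotent: safe if the table already exists
            }
        }
    }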
23:16:23 zookeeper | [2024-04-23 23:13:55,235] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,242] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,242] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,242] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,242] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,243] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:23 zookeeper | [2024-04-23 23:13:55,243] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:23 zookeeper | [2024-04-23 23:13:55,243] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:23 zookeeper | [2024-04-23 23:13:55,243] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:23 zookeeper | [2024-04-23 23:13:55,245] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 23:16:23 zookeeper | [2024-04-23 23:13:55,245] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,245] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,245] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,245] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,245] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:23 zookeeper | [2024-04-23 23:13:55,245] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:23 zookeeper | [2024-04-23 23:13:55,257] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) 23:16:23 zookeeper | [2024-04-23 23:13:55,259] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:23 zookeeper | [2024-04-23 23:13:55,259] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:23 zookeeper | [2024-04-23 23:13:55,261] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:23 zookeeper | [2024-04-23 23:13:55,272] INFO [ASCII-art ZooKeeper logo banner] (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:23
policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:host.name=df6985b66398 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 kafka | [2024-04-23 23:14:27,130] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 
for partition __consumer_offsets-15 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 
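
The controller TRACE entries above show broker 1 becoming leader for each __consumer_offsets partition and for policy-pdp-pap-0; with a single broker in this test cluster, the replica set, ISR and leader are all [1]. Below is a minimal sketch (not part of this job) of reading that resulting assignment back with Kafka's Java AdminClient; the bootstrap address is an assumption, since it is not printed in this log.

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class LeaderCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Bootstrap address is an assumption; it is not shown in this log.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                TopicDescription td = admin.describeTopics(List.of("policy-pdp-pap"))
                        .allTopicNames().get().get("policy-pdp-pap");
                // Prints the state the become-leader LeaderAndIsr requests
                // above installed: leader=1, isr=[1] for every partition.
                td.partitions().forEach(p -> System.out.printf(
                        "partition %d leader=%d isr=%s%n",
                        p.partition(), p.leader().id(), p.isr()));
            }
        }
    }

The same information is what the state.change.logger traces record on the controller side; the client view is simply the metadata those UpdateMetadata/LeaderAndIsr requests propagated.
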
23:16:23 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,131] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,132] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,135] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,138] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,140] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/u
sr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server 
environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,273] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,274] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,274] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,274] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,274] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,274] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,274] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,275] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:23 zookeeper | [2024-04-23 23:13:55,276] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,276] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,276] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:23 zookeeper | [2024-04-23 23:13:55,277] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:23 zookeeper | [2024-04-23 23:13:55,277] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:23 zookeeper | [2024-04-23 23:13:55,277] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:23 zookeeper | [2024-04-23 23:13:55,277] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:23 zookeeper | [2024-04-23 23:13:55,277] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:23 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:23 
policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:23 policy-db-migrator | JOIN pdpstatistics b 23:16:23 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:23 policy-db-migrator | SET a.id = b.id 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:23 zookeeper | [2024-04-23 23:13:55,277] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:23 zookeeper | [2024-04-23 23:13:55,277] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:23 zookeeper | [2024-04-23 23:13:55,280] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,280] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,280] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:23 zookeeper | [2024-04-23 23:13:55,280] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:23 zookeeper | [2024-04-23 23:13:55,280] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,300] INFO Logging initialized @495ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:23 zookeeper | [2024-04-23 23:13:55,386] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:23 zookeeper | [2024-04-23 23:13:55,386] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:23 zookeeper | [2024-04-23 23:13:55,404] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 23:16:23 zookeeper | [2024-04-23 23:13:55,429] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:23 zookeeper | [2024-04-23 23:13:55,429] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 
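
The ZooKeeper server created above advertises tickTime 2000 ms with a 4000-40000 ms session-timeout window; a client-requested timeout outside that window is clamped during the session handshake. A minimal client-side sketch follows, assuming localhost for the host (the 2181 client-port bind appears in the log just below):

    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ZkSessionProbe {
        public static void main(String[] args) throws Exception {
            // Requested 30000 ms falls inside [minSessionTimeout, maxSessionTimeout]
            // = [4000, 40000] ms, so it is accepted as-is by this server.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000,
                    event -> System.out.println("zk state: " + event.getState()));
            try {
                // ZooKeeper-mode Kafka registers live brokers under /brokers/ids.
                Stat stat = zk.exists("/brokers/ids/1", false);
                System.out.println(stat != null
                        ? "broker 1 registered" : "broker 1 not (yet) registered");
                System.out.println("negotiated session timeout: "
                        + zk.getSessionTimeout() + " ms");
            } finally {
                zk.close();
            }
        }
    }
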
23:16:23 zookeeper | [2024-04-23 23:13:55,430] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 23:16:23 zookeeper | [2024-04-23 23:13:55,433] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:23 zookeeper | [2024-04-23 23:13:55,440] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:23 zookeeper | [2024-04-23 23:13:55,454] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:23 zookeeper | [2024-04-23 23:13:55,455] INFO Started @650ms (org.eclipse.jetty.server.Server) 23:16:23 zookeeper | [2024-04-23 23:13:55,455] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,460] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:23 zookeeper | [2024-04-23 23:13:55,461] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:23 zookeeper | [2024-04-23 23:13:55,462] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:23 zookeeper | [2024-04-23 23:13:55,464] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:23 zookeeper | [2024-04-23 23:13:55,476] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:23 zookeeper | [2024-04-23 23:13:55,476] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:23 zookeeper | [2024-04-23 23:13:55,477] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:23 zookeeper | [2024-04-23 23:13:55,477] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:23 zookeeper | [2024-04-23 23:13:55,481] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:23 zookeeper | [2024-04-23 23:13:55,481] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:23 zookeeper | [2024-04-23 23:13:55,484] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:23 zookeeper | [2024-04-23 23:13:55,485] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:23 zookeeper | [2024-04-23 23:13:55,485] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:23 zookeeper | [2024-04-23 23:13:55,499] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:23 zookeeper | [2024-04-23 23:13:55,499] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:23 zookeeper | [2024-04-23 23:13:55,517] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:23 
zookeeper | [2024-04-23 23:13:55,517] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:16:23 zookeeper | [2024-04-23 23:13:56,892] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | 
[2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,142] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,142] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,150] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 
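
The jpapolicyaudit step above is typical of these migrations: an idempotent CREATE TABLE IF NOT EXISTS (0190) followed by an index build (0200), so a rerun does not fail on an existing table. A minimal sketch of the same two statements issued over plain JDBC, rather than by the migrator's own script runner; the JDBC URL and credentials are placeholders, and the MariaDB driver is assumed to be on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AuditTableUpgrade {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mariadb://localhost:3306/policyadmin"; // placeholder
            try (Connection c = DriverManager.getConnection(url, "policy_user", "policy_pass");
                 Statement s = c.createStatement()) {
                // 0190-jpapolicyaudit.sql, copied from the log above:
                s.execute("CREATE TABLE IF NOT EXISTS jpapolicyaudit ("
                        + "ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, "
                        + "PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, "
                        + "USER VARCHAR(255) NULL, ID BIGINT NOT NULL, "
                        + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                        + "PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))");
                // 0200-JpaPolicyAuditIndex_timestamp.sql:
                s.execute("CREATE INDEX JpaPolicyAuditIndex_timestamp "
                        + "ON jpapolicyaudit(TIMESTAMP)");
            }
        }
    }
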
23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.006750438Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.007696314Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=945.696µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.011401514Z level=info msg="Executing migration" id="Add column secure_settings in 
alert_notification" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.015250797Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.848273ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.020209549Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.02027759Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=71.351µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.026541272Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.027525749Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=982.626µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.030773002Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.03191576Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.142118ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.0373954Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.037538062Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=141.522µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.04164596Z level=info msg="Executing migration" id="create annotation table v5" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.042775528Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.128918ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.04596531Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.047362923Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.396513ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.052365155Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.053665206Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.302391ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.06310694Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.064449823Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.342083ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.06911179Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.071114112Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.997642ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.077509547Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.078545164Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.035627ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.103041325Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:23 grafana | logger=migrator 
t=2024-04-23T23:13:56.103104566Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=62.401µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.112235855Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.117910158Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.674823ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.123380468Z level=info msg="Executing migration" id="Drop category_id index" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.124614278Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.23416ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.128750686Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.133751458Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.001032ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.141086947Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.141898641Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=810.874µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.14856108Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.150105805Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.543805ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.153832926Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.155179888Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.346532ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.159162614Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.17421592Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.046956ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.181781134Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.182648848Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=870.344µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.186253557Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:23 policy-db-migrator | -------------- 23:16:23 
policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | msg 23:16:23 policy-db-migrator | upgrade to 1100 completed 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:23 policy-db-migrator | 
-------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | TRUNCATE TABLE sequence 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE pdpstatistics 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | DROP TABLE statistics_sequence 23:16:23 policy-db-migrator | -------------- 23:16:23 policy-db-migrator | 23:16:23 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:23 policy-db-migrator | name version 23:16:23 policy-db-migrator | policyadmin 1300 23:16:23 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:23 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:55 23:16:23 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:55 23:16:23 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:55 23:16:23 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 23:16:23 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56 
23:16:23 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:56
23:16:23 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.187413916Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.160369ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.191116966Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.191703066Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=585.8µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.196024007Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.196691168Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=666.831µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.200308967Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.200556811Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=247.434µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.203982607Z level=info msg="Executing migration" id="Add created time to annotation table"
23:16:23 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:57
23:16:23 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:58
23:16:23 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:13:59
23:16:23 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2304242313550800u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2304242313550900u 1 2024-04-23 23:14:00
23:16:23 policy-pap | [2024-04-23T23:14:27.273+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.337+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.381+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.447+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.489+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.554+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.596+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.662+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.704+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.774+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.809+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:23 policy-pap | [2024-04-23T23:14:27.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:23 policy-pap | [2024-04-23T23:14:27.894+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:23 policy-pap | [2024-04-23T23:14:27.919+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:23 policy-pap | [2024-04-23T23:14:27.921+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] (Re-)joining group
23:16:23 policy-pap | [2024-04-23T23:14:27.940+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Request joining group due to: need to re-join with the given member-id: consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3-6d738350-946d-4191-bc34-76a796a76349
23:16:23 policy-pap | [2024-04-23T23:14:27.941+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:23 policy-pap | [2024-04-23T23:14:27.942+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] (Re-)joining group
23:16:23 policy-pap | [2024-04-23T23:14:27.942+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-36162ef8-e1db-4316-bf26-ea8170483c19
23:16:23 policy-pap | [2024-04-23T23:14:27.942+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:23 policy-pap | [2024-04-23T23:14:27.942+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:23 policy-pap | [2024-04-23T23:14:30.975+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3-6d738350-946d-4191-bc34-76a796a76349', protocol='range'}
23:16:23 policy-pap | [2024-04-23T23:14:30.979+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-36162ef8-e1db-4316-bf26-ea8170483c19', protocol='range'}
23:16:23 policy-pap | [2024-04-23T23:14:30.988+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-36162ef8-e1db-4316-bf26-ea8170483c19=Assignment(partitions=[policy-pdp-pap-0])}
23:16:23 policy-pap | [2024-04-23T23:14:30.989+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Finished assignment for group at generation 1: {consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3-6d738350-946d-4191-bc34-76a796a76349=Assignment(partitions=[policy-pdp-pap-0])}
23:16:23 policy-pap | [2024-04-23T23:14:31.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-36162ef8-e1db-4316-bf26-ea8170483c19', protocol='range'}
23:16:23 policy-pap | [2024-04-23T23:14:31.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:23 policy-pap | [2024-04-23T23:14:31.045+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3-6d738350-946d-4191-bc34-76a796a76349', protocol='range'}
23:16:23 policy-pap | [2024-04-23T23:14:31.046+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:23 policy-pap | [2024-04-23T23:14:31.052+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Adding newly assigned partitions: policy-pdp-pap-0
23:16:23 policy-pap | [2024-04-23T23:14:31.052+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
23:16:23 policy-pap | [2024-04-23T23:14:31.073+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Found no committed offset for partition policy-pdp-pap-0
23:16:23 policy-pap | [2024-04-23T23:14:31.074+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
23:16:23 policy-pap | [2024-04-23T23:14:31.098+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:23 policy-pap | [2024-04-23T23:14:31.099+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3, groupId=b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:23 policy-pap | [2024-04-23T23:14:33.140+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-5] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:23 policy-pap | [2024-04-23T23:14:33.140+00:00|INFO|DispatcherServlet|http-nio-6969-exec-5] Initializing Servlet 'dispatcherServlet'
23:16:23 policy-pap | [2024-04-23T23:14:33.143+00:00|INFO|DispatcherServlet|http-nio-6969-exec-5] Completed initialization in 2 ms
23:16:23 policy-pap | [2024-04-23T23:14:48.211+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
23:16:23 policy-pap | []
23:16:23 policy-pap | [2024-04-23T23:14:48.212+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:23 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9279c3a6-4c8e-4fc2-958d-aba5a6ed1655","timestampMs":1713914088170,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"}
23:16:23 policy-pap | [2024-04-23T23:14:48.212+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:23 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9279c3a6-4c8e-4fc2-958d-aba5a6ed1655","timestampMs":1713914088170,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"}
23:16:23 policy-pap | [2024-04-23T23:14:48.222+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:23 policy-pap | [2024-04-23T23:14:48.291+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting
23:16:23 policy-pap | [2024-04-23T23:14:48.291+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting listener
23:16:23 policy-pap | [2024-04-23T23:14:48.291+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting timer
23:16:23 policy-pap | [2024-04-23T23:14:48.292+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c729b544-070b-4fbc-8f45-5f1397ac3912, expireMs=1713914118292]
23:16:23 policy-pap | [2024-04-23T23:14:48.293+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting enqueue
23:16:23 policy-pap | [2024-04-23T23:14:48.293+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate started
23:16:23 policy-pap | [2024-04-23T23:14:48.293+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c729b544-070b-4fbc-8f45-5f1397ac3912, expireMs=1713914118292]
23:16:23 policy-pap | [2024-04-23T23:14:48.296+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c729b544-070b-4fbc-8f45-5f1397ac3912","timestampMs":1713914088272,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:23 policy-pap | [2024-04-23T23:14:48.332+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c729b544-070b-4fbc-8f45-5f1397ac3912","timestampMs":1713914088272,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:23 policy-pap | [2024-04-23T23:14:48.333+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:23 policy-pap | [2024-04-23T23:14:48.334+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c729b544-070b-4fbc-8f45-5f1397ac3912","timestampMs":1713914088272,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:23 policy-pap | [2024-04-23T23:14:48.334+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:23 policy-pap | [2024-04-23T23:14:48.355+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:23 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0dfbde6d-fdf0-4c97-8574-1659c8f765f0","timestampMs":1713914088345,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"}
23:16:23 policy-pap | [2024-04-23T23:14:48.361+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:23 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0dfbde6d-fdf0-4c97-8574-1659c8f765f0","timestampMs":1713914088345,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup"}
23:16:23 policy-pap | [2024-04-23T23:14:48.362+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:23 policy-pap | [2024-04-23T23:14:48.371+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:23 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c729b544-070b-4fbc-8f45-5f1397ac3912","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d386bc37-2c97-4c73-8201-a95ea6d6f4f2","timestampMs":1713914088346,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:23 policy-pap | [2024-04-23T23:14:48.394+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping
23:16:23 policy-pap | [2024-04-23T23:14:48.394+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping enqueue
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,153] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,194] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.209117632Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.134635ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.213901339Z level=info msg="Executing migration" id="Add updated time to annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.218327142Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.424513ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.222195155Z level=info msg="Executing migration" id="Add index for created in annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.223377135Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.18172ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.228303615Z level=info msg="Executing migration" id="Add index for updated in annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.230118785Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.82056ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.235763517Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.236128033Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=365.566µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.239930096Z level=info msg="Executing migration" id="Add epoch_end column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.24506882Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=5.138404ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.252227927Z level=info msg="Executing migration" id="Add index for epoch_end"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.254381813Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=2.149776ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.258970147Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.259247902Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=248.604µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.264875314Z level=info msg="Executing migration" id="Move region to single row"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.265532025Z level=info msg="Migration successfully executed" id="Move region to single row" duration=661.092µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.269080563Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.27011811Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.029916ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.273168379Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.274032914Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=864.565µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.280733604Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.28172924Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=996.006µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.288141115Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.289644979Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.503864ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.293240009Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.294266275Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.026266ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.299066244Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.300024539Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=958.325µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.303139351Z level=info msg="Executing migration" id="Increase tags column to length 4096"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.303250162Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=114.532µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.307410891Z level=info msg="Executing migration" id="create test_data table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.308348246Z level=info msg="Migration successfully executed" id="create test_data table"
duration=936.055µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.31408484Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.314972224Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=887.054µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.320745728Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.322163812Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.415904ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.329487112Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.331972452Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=2.47876ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.337680636Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.338214705Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=531.62µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.342164179Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.342619477Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=454.838µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.346801185Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.346987768Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=189.713µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.351287629Z level=info msg="Executing migration" id="create team table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.352272275Z level=info msg="Migration successfully executed" id="create team table" duration=984.386µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.355601109Z level=info msg="Executing migration" id="add index team.org_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.356668557Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.067427ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.362251878Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.363398047Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.145629ms 23:16:23 policy-pap | [2024-04-23T23:14:48.394+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping timer 23:16:23 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:00 23:16:23 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:00 23:16:23 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:00 
23:16:23 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2304242313551000u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2304242313551100u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2304242313551200u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2304242313551200u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2304242313551200u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2304242313551200u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2304242313551300u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2304242313551300u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2304242313551300u 1 2024-04-23 23:14:01 23:16:23 policy-db-migrator | policyadmin: OK @ 1300 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-14 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,195] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,196] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:23 kafka | [2024-04-23 23:14:27,197] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,251] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,262] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,268] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,271] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,274] INFO 
[Broker id=1] Leader __consumer_offsets-3 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,292] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,293] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,293] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,294] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,294] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,302] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,304] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,304] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,304] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,304] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,315] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,315] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,315] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,315] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,315] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,323] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,323] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,323] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,323] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,323] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,337] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,337] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,337] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,337] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,337] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,365] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,365] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,365] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,365] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,366] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,374] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 policy-pap | [2024-04-23T23:14:48.394+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c729b544-070b-4fbc-8f45-5f1397ac3912, expireMs=1713914118292] 23:16:23 policy-pap | [2024-04-23T23:14:48.394+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping listener 23:16:23 policy-pap | [2024-04-23T23:14:48.394+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopped 23:16:23 policy-pap | [2024-04-23T23:14:48.400+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:23 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c729b544-070b-4fbc-8f45-5f1397ac3912","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d386bc37-2c97-4c73-8201-a95ea6d6f4f2","timestampMs":1713914088346,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.401+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c729b544-070b-4fbc-8f45-5f1397ac3912 23:16:23 policy-pap | [2024-04-23T23:14:48.402+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate successful 23:16:23 policy-pap | [2024-04-23T23:14:48.402+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 start publishing next request 23:16:23 policy-pap | [2024-04-23T23:14:48.402+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange starting 23:16:23 policy-pap | [2024-04-23T23:14:48.402+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange starting listener 23:16:23 policy-pap | 
[2024-04-23T23:14:48.402+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange starting timer 23:16:23 policy-pap | [2024-04-23T23:14:48.403+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=6e9adc99-8665-4eb8-adc5-9505d322764a, expireMs=1713914118403] 23:16:23 policy-pap | [2024-04-23T23:14:48.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange starting enqueue 23:16:23 policy-pap | [2024-04-23T23:14:48.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange started 23:16:23 policy-pap | [2024-04-23T23:14:48.403+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=6e9adc99-8665-4eb8-adc5-9505d322764a, expireMs=1713914118403] 23:16:23 policy-pap | [2024-04-23T23:14:48.404+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e9adc99-8665-4eb8-adc5-9505d322764a","timestampMs":1713914088273,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.419+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e9adc99-8665-4eb8-adc5-9505d322764a","timestampMs":1713914088273,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.420+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:23 policy-pap | [2024-04-23T23:14:48.432+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:23 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e9adc99-8665-4eb8-adc5-9505d322764a","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"7b241b01-dd60-4faa-97f7-16154ece87ee","timestampMs":1713914088418,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.432+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6e9adc99-8665-4eb8-adc5-9505d322764a 23:16:23 policy-pap | [2024-04-23T23:14:48.440+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6e9adc99-8665-4eb8-adc5-9505d322764a","timestampMs":1713914088273,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.441+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:23 policy-pap | [2024-04-23T23:14:48.444+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6e9adc99-8665-4eb8-adc5-9505d322764a","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"7b241b01-dd60-4faa-97f7-16154ece87ee","timestampMs":1713914088418,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.444+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange stopping 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange stopping enqueue 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange stopping timer 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=6e9adc99-8665-4eb8-adc5-9505d322764a, expireMs=1713914118403] 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange stopping listener 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange stopped 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpStateChange successful 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 start publishing next request 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting listener 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting timer 23:16:23 policy-pap | 
[2024-04-23T23:14:48.445+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=6df3431b-100f-44ac-aba2-9368f2ebc97d, expireMs=1713914118445] 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate starting enqueue 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.370625655Z level=info msg="Executing migration" id="Add column uid in team" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.375392774Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.766709ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.378648847Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.378919611Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=270.484µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.386216601Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.38804228Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.829469ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.394765461Z level=info msg="Executing migration" id="create team member table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.39597106Z level=info msg="Migration successfully executed" id="create team member table" duration=1.20937ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.400622287Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.401567832Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=945.565µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.407191063Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.408209281Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.018068ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.412548732Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.413464916Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=915.964µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.418422927Z level=info msg="Executing migration" id="Add column email to team table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.424904834Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=6.477407ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.430456175Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.435295394Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.838619ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.446253883Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.45154203Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.234196ms 23:16:23 
grafana | logger=migrator t=2024-04-23T23:13:56.454471808Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.455585716Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.113478ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.458969902Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.460060169Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.090287ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.464524813Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.46562712Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.096457ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.46868232Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.469615126Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=932.816µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.474973923Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.476035311Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.060558ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.480630586Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.48149618Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=865.224µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.484895246Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.486414851Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.516035ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.492885307Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.493969554Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.084977ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.500158445Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.500613004Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=455.199µs 23:16:23 policy-pap | [2024-04-23T23:14:48.445+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate started 23:16:23 policy-pap | [2024-04-23T23:14:48.446+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:23 policy-pap | 
{"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6df3431b-100f-44ac-aba2-9368f2ebc97d","timestampMs":1713914088430,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.455+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6df3431b-100f-44ac-aba2-9368f2ebc97d","timestampMs":1713914088430,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.455+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:23 policy-pap | [2024-04-23T23:14:48.457+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:23 policy-pap | {"source":"pap-c4065d14-a0bf-4092-a031-5d389147ed84","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6df3431b-100f-44ac-aba2-9368f2ebc97d","timestampMs":1713914088430,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.457+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:23 policy-pap | [2024-04-23T23:14:48.466+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:23 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6df3431b-100f-44ac-aba2-9368f2ebc97d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"316de5f6-9d37-41f7-ba58-e7b26873131f","timestampMs":1713914088455,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.467+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6df3431b-100f-44ac-aba2-9368f2ebc97d 23:16:23 policy-pap | [2024-04-23T23:14:48.467+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:23 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6df3431b-100f-44ac-aba2-9368f2ebc97d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"316de5f6-9d37-41f7-ba58-e7b26873131f","timestampMs":1713914088455,"name":"apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:23 policy-pap | [2024-04-23T23:14:48.468+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping 23:16:23 policy-pap | [2024-04-23T23:14:48.468+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping enqueue 23:16:23 policy-pap | [2024-04-23T23:14:48.468+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping timer 23:16:23 policy-pap | 
[2024-04-23T23:14:48.468+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6df3431b-100f-44ac-aba2-9368f2ebc97d, expireMs=1713914118445] 23:16:23 policy-pap | [2024-04-23T23:14:48.468+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopping listener 23:16:23 policy-pap | [2024-04-23T23:14:48.468+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate stopped 23:16:23 policy-pap | [2024-04-23T23:14:48.472+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 PdpUpdate successful 23:16:23 policy-pap | [2024-04-23T23:14:48.472+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-9a300069-96fa-4d3e-aa1e-60a6e45bc156 has no more requests 23:16:23 policy-pap | [2024-04-23T23:14:53.592+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 23:16:23 policy-pap | [2024-04-23T23:14:53.635+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:23 policy-pap | [2024-04-23T23:14:53.667+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:23 policy-pap | [2024-04-23T23:14:53.669+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:23 policy-pap | [2024-04-23T23:14:54.070+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:54.581+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:54.581+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:55.125+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:55.338+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:23 policy-pap | [2024-04-23T23:14:55.438+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:23 policy-pap | [2024-04-23T23:14:55.438+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:55.439+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:55.452+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-23T23:14:55Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-23T23:14:55Z, user=policyadmin)] 23:16:23 policy-pap | [2024-04-23T23:14:56.177+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:56.179+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:23 policy-pap | [2024-04-23T23:14:56.179+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:23 policy-pap | [2024-04-23T23:14:56.179+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:56.180+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 
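
The http-nio-6969 records above trace the CSIT deploy phase: PAP caches testGroup, registers a deploy for each policy, and flushes PolicyAudit rows to the database before the undeploy pass. A small sketch mirroring those audit rows as printed in the log; the dataclass is illustrative only (the real PolicyAudit is a persisted entity inside policy-pap, with auditId presumably assigned on save, hence auditId=null in the log).

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PolicyAudit:
        """Field names follow the PolicyAudit toString() output in this log."""
        auditId: Optional[int]
        pdpGroup: str
        pdpType: str
        policy: str      # "<name> <version>", e.g. "onap.restart.tca 1.0.0"
        action: str      # DEPLOYMENT or UNDEPLOYMENT
        timestamp: str   # ISO-8601 UTC, as printed
        user: str

    # The two DEPLOYMENT audit rows flushed at 23:14:55, reproduced from the
    # "sending audit records to database" record above.
    records = [
        PolicyAudit(None, "testGroup", "pdpTypeA", "onap.restart.tca 1.0.0",
                    "DEPLOYMENT", "2024-04-23T23:14:55Z", "policyadmin"),
        PolicyAudit(None, "testGroup", "pdpTypeC", "operational.apex.decisionMaker 1.0.0",
                    "DEPLOYMENT", "2024-04-23T23:14:55Z", "policyadmin"),
    ]

    for r in records:
        print(f"{r.timestamp} {r.action:<12} {r.policy} ({r.pdpGroup}/{r.pdpType}) by {r.user}")

The matching UNDEPLOYMENT audit rows for the same two policies are flushed a second later and appear in the records that follow.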
23:16:23 policy-pap | [2024-04-23T23:14:56.193+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-23T23:14:56Z, user=policyadmin)] 23:16:23 policy-pap | [2024-04-23T23:14:56.495+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group defaultGroup 23:16:23 policy-pap | [2024-04-23T23:14:56.495+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:56.496+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:23 policy-pap | [2024-04-23T23:14:56.496+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:23 policy-pap | [2024-04-23T23:14:56.496+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:56.496+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup 23:16:23 policy-pap | [2024-04-23T23:14:56.504+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-23T23:14:56Z, user=policyadmin)] 23:16:23 policy-pap | [2024-04-23T23:15:17.075+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup 23:16:23 policy-pap | [2024-04-23T23:15:17.077+00:00|INFO|SessionData|http-nio-6969-exec-2] deleting DB group testGroup 23:16:23 policy-pap | [2024-04-23T23:15:18.293+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c729b544-070b-4fbc-8f45-5f1397ac3912, expireMs=1713914118292] 23:16:23 policy-pap | [2024-04-23T23:15:18.403+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=6e9adc99-8665-4eb8-adc5-9505d322764a, expireMs=1713914118403] 23:16:23 kafka | [2024-04-23 23:14:27,376] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,376] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,377] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,377] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,387] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,388] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,388] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,388] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,388] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,400] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,400] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,401] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,401] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,401] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,409] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,410] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,410] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,410] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,410] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,418] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,418] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,418] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,418] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,418] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,425] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,426] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,426] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,426] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,426] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,434] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,435] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,435] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,435] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,435] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,446] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.503800136Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.504023269Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=223.554µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.508981621Z level=info msg="Executing migration" id="create tag table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.509798924Z level=info msg="Migration successfully executed" id="create tag table" duration=816.813µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.513174949Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.514116334Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=940.945µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.517346648Z level=info msg="Executing migration" id="create login attempt table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.518177961Z level=info msg="Migration successfully executed" id="create login attempt table" duration=830.503µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.523723852Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.524635616Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=910.694µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.530612374Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.531684522Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.072418ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.535033747Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:23 grafana | logger=migrator 
t=2024-04-23T23:13:56.549132318Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.09756ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.554523676Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.555277508Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=753.642µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.558502851Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.559415376Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=912.775µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.562557727Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.562934103Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=376.386µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.569900197Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.570488178Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=587.911µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.574648805Z level=info msg="Executing migration" id="create user auth table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.575397267Z level=info msg="Migration successfully executed" id="create user auth table" duration=748.512µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.578449577Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.579349933Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=898.796µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.588118826Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.588266798Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=154.112µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.59513315Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.60117065Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.03661ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.604278111Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.609564077Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.285655ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.61463475Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.620307552Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.671812ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.628008119Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:23 grafana | logger=migrator 
t=2024-04-23T23:13:56.634862721Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=6.861062ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.639195532Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.640190918Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=994.876µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.643188988Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.648373922Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.184124ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.654195067Z level=info msg="Executing migration" id="create server_lock table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.655010531Z level=info msg="Migration successfully executed" id="create server_lock table" duration=816.744µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.66231529Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.663208565Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=893.215µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.667954582Z level=info msg="Executing migration" id="create user auth token table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.668803477Z level=info msg="Migration successfully executed" id="create user auth token table" duration=845.735µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.675228312Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.676578254Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.356532ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.682987269Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.687924269Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=4.92955ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.695164338Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.697062779Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.899261ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.703908571Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.708023639Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.114038ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.711584637Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.712877258Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.292771ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.717613235Z level=info msg="Executing migration" id="create cache_data table" 23:16:23 grafana | logger=migrator 
t=2024-04-23T23:13:56.718792824Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.178889ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.723751816Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.724993087Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.245481ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.728435273Z level=info msg="Executing migration" id="create short_url table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.729248806Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=817.773µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.737151616Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.73806Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=909.755µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.747514836Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.747694028Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=183.883µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.751517981Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.751793915Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=272.825µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.770288028Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.771282834Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=996.126µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.782040591Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.7844795Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.506891ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.827055197Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.828492611Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.441924ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.834943156Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.835077629Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=144.403µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.838725718Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.840450557Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.722849ms 
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.847476521Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.84860045Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.124389ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.853650383Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.854522597Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=872.114µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.862592349Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.8644816Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.899591ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.87301713Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.885919031Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=12.891861ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.889764565Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.890927443Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.162248ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.899691417Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.89988107Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=190.863µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.903032971Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.904264642Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.231521ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.909554378Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.910713947Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.159039ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.917292484Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.918552705Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.259451ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.923845962Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.924137007Z level=info msg="Migration 
successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=291.625µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.92802184Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.928885224Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=863.404µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.935455772Z level=info msg="Executing migration" id="create alert_instance table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.936906906Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.451753ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.940449664Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.942001839Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.557135ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.946122517Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.947194994Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.072827ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.94997844Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.955936957Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.958247ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.963566672Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.964354945Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=787.503µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.968206228Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.969170084Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=963.206µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.974462611Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:56.99827777Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.802969ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.003310353Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.026346355Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.040333ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.032042388Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:23 grafana | 
logger=migrator t=2024-04-23T23:13:57.033252138Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.20934ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.040473724Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.041687124Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.21371ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.044939257Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.051522663Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.583186ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.055664191Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.061309112Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.646681ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.070327388Z level=info msg="Executing migration" id="create alert_rule table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.071923184Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.606046ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.082103808Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.083185596Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.086718ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.087921832Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.089148363Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.225891ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.092510047Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.096264507Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=3.75402ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.101138467Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.101308529Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=168.752µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.104507591Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.111886951Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.3784ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.126741651Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.132009406Z level=info 
msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.273125ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.168587528Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.175403938Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.82018ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.184673699Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.185548433Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=877.624µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.189843643Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.190843009Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=999.056µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.194338675Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.199038182Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.700117ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.206866528Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.212998127Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.130879ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.217076393Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.217827676Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=750.503µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.221218761Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.226290043Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.070452ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.230714934Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.237109138Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.391054ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.24529959Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.245840879Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=545.459µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.250765328Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.252943744Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=2.177706ms 23:16:23 grafana | 
logger=migrator t=2024-04-23T23:13:57.26254693Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.26383522Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.28802ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.268913603Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:23 kafka | [2024-04-23 23:14:27,450] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,450] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,450] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,451] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,460] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,461] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,461] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,462] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,462] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,471] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,472] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,472] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,472] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,472] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,485] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,487] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,488] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,488] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,489] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,501] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,501] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,501] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,501] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,502] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,508] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,508] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,508] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,508] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,508] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,514] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,515] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,515] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,515] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,515] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.270811194Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.897611ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.275365257Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.275439568Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=75.241µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.280608722Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.285136886Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.528314ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.289877712Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.300153158Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=10.280927ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.306890087Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.313083148Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.192531ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.319817337Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.326707098Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.887101ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.330355967Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.337479522Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.114655ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.343940468Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.344015379Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=76.301µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.34720838Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.347923182Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=715.372µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.352433475Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.359221354Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.781639ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.364720314Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:23 
grafana | logger=migrator t=2024-04-23T23:13:57.364771944Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=51.96µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.374472522Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.379067576Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.591583ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.38363276Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.384356381Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=723.561µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.389282001Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.393766813Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.484882ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.398098654Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.398663053Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=564.319µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.401958976Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.402656758Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=701.422µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.410540225Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.417956465Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.42239ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.424481681Z level=info msg="Executing migration" id="create provenance_type table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.425312605Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=832.414µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.42937138Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.430144252Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=768.912µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.434993261Z level=info msg="Executing migration" id="create alert_image table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.43558921Z level=info msg="Migration successfully executed" id="create alert_image table" duration=595.789µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.440443039Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.44115599Z level=info msg="Migration successfully 
executed" id="add unique index on token to alert_image table" duration=712.561µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.447536875Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.447589125Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=55.78µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.452939662Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.453684384Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=745.042µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.459443967Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.460209269Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=764.782µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.464510869Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.464795274Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.468820869Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.469126413Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=305.694µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.473808839Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.474564901Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=756.292µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.478864201Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.483830802Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.966271ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.487525852Z level=info msg="Executing migration" id="create library_element table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.501472427Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=13.944045ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.506782324Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.508710894Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.927481ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.513916659Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.515396613Z level=info msg="Migration successfully executed" id="create library_element_connection table 
v1" duration=1.476524ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.518993991Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.521153586Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.159195ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.52879469Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.529912978Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.118358ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.536088378Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.536188469Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=100.301µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.539421521Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.539563363Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=142.352µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.542733346Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.543169922Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=437.346µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.54979683Z level=info msg="Executing migration" id="create data_keys table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.550991209Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.194109ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.555832917Z level=info msg="Executing migration" id="create secrets table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.55660044Z level=info msg="Migration successfully executed" id="create secrets table" duration=767.713µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.560774677Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.593284974Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=32.510477ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.597829447Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.604245471Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.417064ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.608926907Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.609183531Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=256.754µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.613088075Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.647789366Z level=info 
msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.701771ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.653300988Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.683027904Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.728856ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.687920149Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.688997719Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.07857ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.694177201Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.69529721Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.120029ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.701096973Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.701304436Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=207.533µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.706249875Z level=info msg="Executing migration" id="create permission table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.70713668Z level=info msg="Migration successfully executed" id="create permission table" duration=886.906µs 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.712923843Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.715872925Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.943272ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.723038462Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.724539259Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.504277ms 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.735749798Z level=info msg="Executing migration" id="create role table" 23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.736761746Z level=info msg="Migration successfully executed" id="create role table" duration=1.012068ms 23:16:23 kafka | [2024-04-23 23:14:27,523] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:23 kafka | [2024-04-23 23:14:27,523] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:23 kafka | [2024-04-23 23:14:27,523] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:23 kafka | [2024-04-23 23:14:27,524] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:23 kafka | 
23:16:23 kafka | [2024-04-23 23:14:27,523] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,523] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,523] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,524] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,524] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,533] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,534] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,534] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,535] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,535] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,568] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,568] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,568] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,569] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,569] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,576] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,577] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,577] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,577] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,577] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,584] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,585] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,585] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,585] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,585] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,592] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,593] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,593] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,593] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,594] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,602] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,603] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,603] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,603] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,603] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.749515052Z level=info msg="Executing migration" id="add column display_name"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.757191458Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.680237ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.760798482Z level=info msg="Executing migration" id="add column group_name"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.769011788Z level=info msg="Migration successfully executed" id="add column group_name" duration=8.210506ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.773689111Z level=info msg="Executing migration" id="add index role.org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.774499815Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=810.664µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.779782479Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.781574691Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.789482ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.789927019Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.791389005Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.465306ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.797364022Z level=info msg="Executing migration" id="create team role table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.798238287Z level=info msg="Migration successfully executed" id="create team role table" duration=874.565µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.804906745Z level=info msg="Executing migration" id="add index team_role.org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.805783261Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=876.046µs
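Every __consumer_offsets partition above is created with cleanup.policy=compact, compression.type="producer", and a 100 MiB segment size (segment.bytes=104857600); compaction is what lets the broker keep only the latest committed offset per group/topic/partition key. A sketch of creating a topic with the same properties through the kafka-python admin client; the topic name and bootstrap address are assumptions for illustration, not values from this job:

    from kafka.admin import KafkaAdminClient, NewTopic

    # Connect to a broker (address is an assumption for illustration).
    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

    # A compacted topic mirroring the __consumer_offsets settings in the log:
    # only the most recent record per key survives log cleaning.
    topic = NewTopic(
        name="demo-compacted",   # hypothetical topic name
        num_partitions=50,       # __consumer_offsets defaults to 50 partitions
        replication_factor=1,    # single-broker setup, as in this CSIT run
        topic_configs={
            "cleanup.policy": "compact",
            "segment.bytes": "104857600",
        },
    )
    admin.create_topics([topic])
    admin.close()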
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.809417025Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.810619216Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.201651ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.816962349Z level=info msg="Executing migration" id="add index team_role.team_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.81812739Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.165121ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.824290919Z level=info msg="Executing migration" id="create user role table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.825357398Z level=info msg="Migration successfully executed" id="create user role table" duration=1.065709ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.829827447Z level=info msg="Executing migration" id="add index user_role.org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.831161921Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.333674ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.834692654Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.835968936Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.272912ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.865730674Z level=info msg="Executing migration" id="add index user_role.user_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.866863634Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.13406ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.875228213Z level=info msg="Executing migration" id="create builtin role table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.876254871Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.024458ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.881106016Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.882491852Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.390495ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.889952494Z level=info msg="Executing migration" id="add index builtin_role.name"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.891221016Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.270032ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.895559314Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.902738211Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.177297ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.907516946Z level=info msg="Executing migration" id="add index builtin_role.org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.908876579Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.358904ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.912517135Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.913561673Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.047208ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.918413909Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.919385556Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=974.517µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.925431623Z level=info msg="Executing migration" id="add unique index role.uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.927038472Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.606659ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.930975132Z level=info msg="Executing migration" id="create seed assignment table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.932150103Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.177921ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.935301189Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.936140723Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=839.194µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.945219164Z level=info msg="Executing migration" id="add column hidden to role table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.957028375Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.8111ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.96296734Z level=info msg="Executing migration" id="permission kind migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.97198885Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.02172ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.974481174Z level=info msg="Executing migration" id="permission attribute migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.980359128Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.876864ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.983132798Z level=info msg="Executing migration" id="permission identifier migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.989994289Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.858151ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.996865251Z level=info msg="Executing migration" id="add permission identifier index"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:57.999098711Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=2.24062ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.002741825Z level=info msg="Executing migration" id="add permission action scope role_id index"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.004255991Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.513176ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.007667548Z level=info msg="Executing migration" id="remove permission role_id action scope index"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.008894347Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.226199ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.013602276Z level=info msg="Executing migration" id="create query_history table v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.014683899Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.082703ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.017924617Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.019859341Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.926354ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.024100757Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.024428434Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=327.946µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.032639892Z level=info msg="Executing migration" id="rbac disabled migrator"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.032796711Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=157.819µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.041362317Z level=info msg="Executing migration" id="teams permissions migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.042378216Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=1.015349ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.046284846Z level=info msg="Executing migration" id="dashboard permissions"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.047244903Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=960.867µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.050815767Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.051698609Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=882.683µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.056198678Z level=info msg="Executing migration" id="drop managed folder create actions"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.056551415Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=355.747µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.059851396Z level=info msg="Executing migration" id="alerting notification permissions"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.060469156Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=614.939µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.063813519Z level=info msg="Executing migration" id="create query_history_star table v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.064746924Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=932.385µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.069367219Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.070599259Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.23197ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.07596802Z level=info msg="Executing migration" id="add column org_id in query_history_star"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.085082443Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.113623ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.089619894Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.089868786Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=249.862µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.093529934Z level=info msg="Executing migration" id="create correlation table v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.094817497Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.286783ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.100959915Z level=info msg="Executing migration" id="add index correlations.uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.102886539Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.926494ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.108108253Z level=info msg="Executing migration" id="add index correlations.source_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.109356044Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.247792ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.114258172Z level=info msg="Executing migration" id="add correlation config column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.123751634Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.492892ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.128682794Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.129550996Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=866.722µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.133034255Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.133901578Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=867.293µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.139523001Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.169583212Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.052401ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.175172284Z level=info msg="Executing migration" id="create correlation v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.176222716Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.049812ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.182143264Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.183355312Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.212548ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.187658922Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
23:16:23 kafka | [2024-04-23 23:14:27,613] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,614] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,614] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,614] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,614] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,621] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,621] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,621] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,622] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,622] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,629] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,629] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,629] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,629] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,629] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,634] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,634] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,634] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,634] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,634] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,639] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,640] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,640] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,640] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,640] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,645] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,645] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,645] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,645] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,645] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,652] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,652] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,652] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,652] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,653] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(yAXDFsmnQuORxbK4D4bccg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
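The state.change.logger lines repeat one pattern per partition: broker 1 becomes leader at epoch 0, offset 0, with ISR [1] and no previous leader, which is exactly what you expect on a fresh single-broker deployment like this one. A minimal sketch that confirms every partition landed on the broker by counting those leadership lines per topic, assuming the log text shown here saved to a hypothetical file console.log:

    import re
    from collections import Counter

    # e.g. "INFO [Broker id=1] Leader __consumer_offsets-40 with topic id ..."
    LEADER = re.compile(r"\[Broker id=(\d+)\] Leader ([\w.-]+)-(\d+) ")

    def leaders_per_topic(path):
        counts = Counter()
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                m = LEADER.search(line)
                if m:
                    _broker, topic, _partition = m.groups()
                    counts[topic] += 1
        return counts

    if __name__ == "__main__":
        for topic, n in leaders_per_topic("console.log").items():
            print(f"{topic}: leads {n} partition(s)")

For this run that should report 50 partitions for __consumer_offsets and one for policy-pdp-pap, matching the blocks above.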
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.188964085Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.304903ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.194880673Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.196124314Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.243061ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.213285798Z level=info msg="Executing migration" id="copy correlation v1 to v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.213916819Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=631.66µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.221103809Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.222558509Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.454581ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.227832145Z level=info msg="Executing migration" id="add provisioning column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.240195397Z level=info msg="Migration successfully executed" id="add provisioning column" duration=12.361502ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.244121568Z level=info msg="Executing migration" id="create entity_events table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.245860343Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.738235ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.250320859Z level=info msg="Executing migration" id="create dashboard public config v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.251674035Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.356596ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.258232554Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.258903587Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.263629417Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.264060637Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.267587329Z level=info msg="Executing migration" id="Drop old dashboard public config table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.268383468Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=792.799µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.272716848Z level=info msg="Executing migration" id="recreate dashboard public config v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.273892166Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.174368ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.278963722Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.280165181Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.200549ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.283667031Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.284818028Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.147486ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.289372308Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.290569757Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.197659ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.296620511Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.297608499Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=988.398µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.305833989Z level=info msg="Executing migration" id="Drop public config table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.307056128Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.221429ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.310678455Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.312588677Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.909212ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.316621334Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.317918407Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.296572ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.323255356Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.324459125Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.203769ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.327976276Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.329149443Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.173017ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.336213747Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.36670298Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=30.481992ms
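The correlation entries above (rename correlation to correlation_tmp_qwerty, create correlation v2, copy v1 rows into v2, drop the temp table) are the classic recreate-and-copy pattern for schema changes that cannot be expressed as a single ALTER TABLE; the dashboard_public_config rename that closes this block is the same idea. A compact sketch of those four steps with Python's sqlite3; the table and column names are illustrative, not Grafana's actual schema:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE correlation (uid TEXT, source_uid TEXT)")  # stand-in v1 schema
    con.execute("INSERT INTO correlation VALUES ('a', 's1')")

    # 1. Move the old table out of the way.
    con.execute("ALTER TABLE correlation RENAME TO correlation_tmp_qwerty")
    # 2. Create the v2 table with the changed schema (extra column here).
    con.execute("CREATE TABLE correlation (uid TEXT, source_uid TEXT, config TEXT)")
    # 3. Copy the surviving columns across.
    con.execute("""INSERT INTO correlation (uid, source_uid)
                   SELECT uid, source_uid FROM correlation_tmp_qwerty""")
    # 4. Drop the temporary table.
    con.execute("DROP TABLE correlation_tmp_qwerty")
    con.commit()
    print(con.execute("SELECT * FROM correlation").fetchall())  # [('a', 's1', None)]

The warn-level "Skipping migration: Already executed, but not recorded in migration log" lines show the other half of the machinery: the migrator keeps a bookkeeping table of applied migrations and skips steps whose effects are already present.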
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.371577926Z level=info msg="Executing migration" id="add annotations_enabled column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.380436738Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.854582ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.385454741Z level=info msg="Executing migration" id="add time_selection_enabled column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.392057223Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.602162ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.398070325Z level=info msg="Executing migration" id="delete orphaned public dashboards"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.398337228Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=270.653µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.402316671Z level=info msg="Executing migration" id="add share column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.412960199Z level=info msg="Migration successfully executed" id="add share column" duration=10.642058ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.417062399Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.41728651Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=224.661µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.424595825Z level=info msg="Executing migration" id="create file table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.425995223Z level=info msg="Migration successfully executed" id="create file table" duration=1.397948ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.434433564Z level=info msg="Executing migration" id="file table idx: path natural pk"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.435667473Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.233749ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.43969809Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.440905748Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.207908ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.445923372Z level=info msg="Executing migration" id="create file_meta table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.448081258Z level=info msg="Migration successfully executed" id="create file_meta table" duration=2.157365ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.452896592Z level=info msg="Executing migration" id="file table idx: path key"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.454051778Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.154616ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.458930915Z level=info msg="Executing migration" id="set path collation in file table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.45901992Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=89.145µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.463409253Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.46416094Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=754.208µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.469957172Z level=info msg="Executing migration" id="managed permissions migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.470591362Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=634.451µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.47629884Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.476565553Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=267.523µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.480322425Z level=info msg="Executing migration" id="RBAC action name migrator"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.482600526Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.277831ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.486518617Z level=info msg="Executing migration" id="Add UID column to playlist"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.496119654Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.599376ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.501132288Z level=info msg="Executing migration" id="Update uid column values in playlist"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.501272994Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=140.966µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.506997483Z level=info msg="Executing migration" id="Add index for uid in playlist"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.509049043Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.05068ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.514404392Z level=info msg="Executing migration" id="update group index for alert rules"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.515034654Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=636.682µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.519053079Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.519401396Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=348.417µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.523458394Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.524035171Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=576.338µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.528393023Z level=info msg="Executing migration" id="add action column to seed_assignment"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.53777654Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.383237ms
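Several of the column migrations above come in pairs: an ALTER TABLE ... ADD COLUMN step ("Add UID column to playlist", "add share column") followed by a backfill UPDATE over existing rows ("Update uid column values in playlist", "backfill empty share column fields with default of public") and, often, an index on the freshly populated column. A minimal sqlite3 sketch of that add-then-backfill pattern; the schema and the uid format are invented for illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE playlist (id INTEGER PRIMARY KEY, name TEXT)")
    con.executemany("INSERT INTO playlist (name) VALUES (?)", [("a",), ("b",)])

    # 1. Add the new column (existing rows get NULL).
    con.execute("ALTER TABLE playlist ADD COLUMN uid TEXT")
    # 2. Backfill existing rows with a derived value.
    con.execute("UPDATE playlist SET uid = 'pl-' || id WHERE uid IS NULL")
    # 3. Index the new column, as the migrator does right afterwards.
    con.execute("CREATE UNIQUE INDEX idx_playlist_uid ON playlist (uid)")
    con.commit()
    print(con.execute("SELECT id, name, uid FROM playlist").fetchall())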
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.554795197Z level=info msg="Executing migration" id="add scope column to seed_assignment"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.568001Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=13.207073ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.571741062Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.573062335Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.320804ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.576858511Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.649729195Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=72.872594ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.654347229Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.65518047Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=833.232µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.658590355Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.659382994Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=792.129µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.663015071Z level=info msg="Executing migration" id="add primary key to seed_assigment"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.688642287Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.627156ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.694443359Z level=info msg="Executing migration" id="add origin column to seed_assignment"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.702036058Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.591919ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.705644744Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.705949439Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=303.575µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.714250322Z level=info msg="Executing migration" id="prevent seeding OnCall access"
23:16:23 kafka | [2024-04-23 23:14:27,659] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,660] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,660] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,660] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,660] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,668] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,669] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,669] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,669] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,669] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,676] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,676] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,677] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,677] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,677] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,682] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,682] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,682] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,682] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,682] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,688] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,688] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,688] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,688] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,688] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,694] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,694] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,694] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,694] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,694] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,704] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,705] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,705] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,705] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,705] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,714] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,715] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,715] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,715] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,715] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,723] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,724] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,724] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,724] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,724] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,735] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,736] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,736] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,736] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,736] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,747] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,748] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,748] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,748] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,748] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,757] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,757] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,757] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,757] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,758] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,770] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,772] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,772] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,773] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,773] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
23:16:23 kafka | [2024-04-23 23:14:27,782] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,783] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,783] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,783] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.714423241Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=173.129µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.721380179Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.721800509Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=421.291µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.726736429Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.726982871Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=246.532µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.731387756Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.73167122Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=283.743µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.737456001Z level=info msg="Executing migration" id="create folder table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.738322223Z level=info msg="Migration successfully executed" id="create folder table" duration=893.564µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.743796029Z level=info msg="Executing migration" id="Add index for parent_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.745526963Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.740795ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.750478414Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.751686503Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.208419ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.759749005Z level=info msg="Executing migration" id="Update folder title length"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.759780217Z level=info msg="Migration successfully executed" id="Update folder title length" duration=34.231µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.768845237Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.770674707Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.829729ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.775016517Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.776777803Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.760976ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.780472283Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.781611168Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.138786ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.787959337Z level=info msg="Executing migration" id="Sync dashboard and folder table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.789405218Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=1.450961ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.793430883Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.793834743Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=399.859µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.797173425Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.79849602Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.322835ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.80322424Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.804498781Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.273961ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.810059882Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.811204347Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.143385ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.820411006Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.822902287Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.47111ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.82812181Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.829280647Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.158147ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.832773196Z level=info msg="Executing migration" id="create anon_device table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.833777055Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.003089ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.838252313Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.839559337Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.306114ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.843093629Z level=info msg="Executing migration" id="add index anon_device.updated_at"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.844469385Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.375726ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.847900143Z level=info msg="Executing migration" id="create signing_key table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.849163644Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.263501ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.857810524Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.859119098Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.313044ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.866271226Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.867675584Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.404848ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.871377524Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.871868158Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=491.634µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.876927874Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
23:16:23 kafka | [2024-04-23 23:14:27,784] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,790] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,790] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,790] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,790] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,790] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,798] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:23 kafka | [2024-04-23 23:14:27,798] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:23 kafka | [2024-04-23 23:14:27,798] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,798] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
23:16:23 kafka | [2024-04-23 23:14:27,799] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(ZbEBaBpuQVOKapUMBCq56A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
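
The "Created log" entries above show each __consumer_offsets partition being created with cleanup.policy=compact, compression.type="producer" and segment.bytes=104857600. A minimal sketch of reading those same topic settings back through the Kafka Admin API, assuming the kafka-clients library is on the classpath and a broker is reachable at localhost:9092 (the class name and address are illustrative, not part of this run):

import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeOffsetsTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address; adjust to the environment under test.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config config = admin.describeConfigs(List.of(topic)).all().get().get(topic);
            // Expect cleanup.policy=compact and segment.bytes=104857600,
            // matching the "Created log" entries in the broker log above.
            config.entries().forEach(e -> System.out.println(e.name() + "=" + e.value()));
        }
    }
}
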
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.884917093Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.988248ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.888688816Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.889207111Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=518.935µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.893726461Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.894556701Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=830.45µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.90953562Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.910786491Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.250691ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.916345711Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.917374391Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.02844ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.923211565Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.924435995Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.22415ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.928300952Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.929396506Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.095494ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.932806972Z level=info msg="Executing migration" id="create sso_setting table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.933835702Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.030249ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.939492147Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.940321108Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=829.591µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.943338894Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.94366118Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=322.576µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.948425151Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.948488754Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=64.283µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.954251054Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.96548039Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.233356ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.968796082Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.977838162Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.04149ms
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.981075669Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.98129157Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=215.56µs
23:16:23 grafana | logger=migrator t=2024-04-23T23:13:58.983664815Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.194951093s
23:16:23 grafana | logger=sqlstore t=2024-04-23T23:13:58.990998902Z level=info msg="Created default admin" user=admin
23:16:23 grafana | logger=sqlstore t=2024-04-23T23:13:58.991276896Z level=info msg="Created default organization"
23:16:23 grafana | logger=secrets t=2024-04-23T23:13:58.995949622Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
23:16:23 grafana | logger=plugin.store t=2024-04-23T23:13:59.016279577Z level=info msg="Loading plugins..."
23:16:23 grafana | logger=local.finder t=2024-04-23T23:13:59.058674294Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
23:16:23 grafana | logger=plugin.store t=2024-04-23T23:13:59.058708505Z level=info msg="Plugins loaded" count=55 duration=42.429278ms
23:16:23 grafana | logger=query_data t=2024-04-23T23:13:59.061673168Z level=info msg="Query Service initialization"
23:16:23 grafana | logger=live.push_http t=2024-04-23T23:13:59.074000703Z level=info msg="Live Push Gateway initialization"
23:16:23 grafana | logger=ngalert.migration t=2024-04-23T23:13:59.080507668Z level=info msg=Starting
23:16:23 grafana | logger=ngalert.migration t=2024-04-23T23:13:59.081214242Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
23:16:23 grafana | logger=ngalert.migration orgID=1 t=2024-04-23T23:13:59.081642692Z level=info msg="Migrating alerts for organisation"
23:16:23 grafana | logger=ngalert.migration orgID=1 t=2024-04-23T23:13:59.082263762Z level=info msg="Alerts found to migrate" alerts=0
23:16:23 grafana | logger=ngalert.migration t=2024-04-23T23:13:59.083969685Z level=info msg="Completed alerting migration"
23:16:23 grafana | logger=ngalert.state.manager t=2024-04-23T23:13:59.113175794Z level=info msg="Running in alternative execution of Error/NoData mode"
23:16:23 grafana | logger=infra.usagestats.collector t=2024-04-23T23:13:59.115648214Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
23:16:23 grafana | logger=provisioning.datasources t=2024-04-23T23:13:59.118194047Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
23:16:23 grafana | logger=provisioning.alerting t=2024-04-23T23:13:59.129748645Z level=info msg="starting to provision alerting"
23:16:23 grafana | logger=provisioning.alerting t=2024-04-23T23:13:59.129810568Z level=info msg="finished to provision alerting"
23:16:23 grafana | logger=ngalert.state.manager t=2024-04-23T23:13:59.131168894Z level=info msg="Warming state cache for startup"
23:16:23 grafana | logger=http.server t=2024-04-23T23:13:59.133214292Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
23:16:23 grafana | logger=grafanaStorageLogger t=2024-04-23T23:13:59.135716383Z level=info msg="Storage starting"
23:16:23 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-23T23:13:59.136191925Z level=info msg="Starting MultiOrg Alertmanager"
23:16:23 grafana | logger=ngalert.state.manager t=2024-04-23T23:13:59.180931605Z level=info msg="State cache has been initialized" states=0 duration=49.754862ms
23:16:23 grafana | logger=ngalert.scheduler t=2024-04-23T23:13:59.18103384Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
23:16:23 grafana | logger=ticker t=2024-04-23T23:13:59.181120174Z level=info msg=starting first_tick=2024-04-23T23:14:00Z
23:16:23 grafana | logger=provisioning.dashboard t=2024-04-23T23:13:59.197663553Z level=info msg="starting to provision dashboards"
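
In the "Elected as the group coordinator for partition N" entries that follow, this broker becomes the coordinator for every consumer group whose group.id hashes to an __consumer_offsets partition it leads. A minimal sketch of that mapping, assuming Kafka's default hashing (the class name and group id below are hypothetical; 50 matches the offsets partition count seen in this log):

public class GroupCoordinatorPartition {
    // Sketch of the mapping Kafka uses to place a consumer group on one of
    // the __consumer_offsets partitions: abs(group.id hash) modulo the
    // partition count, with abs done by masking the sign bit so that
    // Integer.MIN_VALUE cannot yield a negative index.
    static int partitionFor(String groupId, int numOffsetsPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
    }

    public static void main(String[] args) {
        // Hypothetical group id; real group ids in this run belong to policy-pap.
        System.out.println(partitionFor("example-group", 50));
    }
}
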
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,806] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,820] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,822] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,823] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 grafana | logger=plugins.update.checker t=2024-04-23T23:13:59.230070548Z level=info msg="Update check succeeded" duration=94.295153ms
23:16:23 grafana | logger=grafana.update.checker t=2024-04-23T23:13:59.236428165Z level=info msg="Update check succeeded" duration=100.87609ms
23:16:23 grafana | logger=sqlstore.transactions t=2024-04-23T23:13:59.244487924Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:23 grafana | logger=sqlstore.transactions t=2024-04-23T23:13:59.308635751Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:23 grafana | logger=provisioning.dashboard t=2024-04-23T23:13:59.509434196Z level=info msg="finished to provision dashboards"
23:16:23 grafana | logger=grafana-apiserver t=2024-04-23T23:13:59.573050394Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
23:16:23 grafana | logger=grafana-apiserver t=2024-04-23T23:13:59.57358324Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
23:16:23 grafana | logger=infra.usagestats t=2024-04-23T23:15:36.149655082Z level=info msg="Usage stats are ready to report"
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,824] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:23 kafka | [2024-04-23 23:14:27,825] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,827] INFO [Broker id=1] Finished LeaderAndIsr request in 679ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
23:16:23 kafka | [2024-04-23 23:14:27,830] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,831] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,831] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,831] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,831] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,832] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,832] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,832] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,833] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,833] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,833] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,833] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,833] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,833] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,834] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,834] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:23 kafka | [2024-04-23 23:14:27,834] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,834] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=ZbEBaBpuQVOKapUMBCq56A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=yAXDFsmnQuORxbK4D4bccg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,834] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,835] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,835] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,835] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,835] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,835] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,836] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,836] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,836] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,836] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,836] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,836] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,837] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,837] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,837] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,837] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,838] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,838] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,838] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,838] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,838] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,838] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,839] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 17 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,844] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:23 kafka | [2024-04-23 23:14:27,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,848] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,849] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,850] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,851] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,853] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) 
for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:23 kafka | [2024-04-23 23:14:27,922] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-36162ef8-e1db-4316-bf26-ea8170483c19 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:27,924] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6 in Empty state. Created a new member id consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3-6d738350-946d-4191-bc34-76a796a76349 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:27,958] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-36162ef8-e1db-4316-bf26-ea8170483c19 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:27,958] INFO [GroupCoordinator 1]: Preparing to rebalance group b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6 in state PreparingRebalance with old generation 0 (__consumer_offsets-13) (reason: Adding new member consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3-6d738350-946d-4191-bc34-76a796a76349 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:28,553] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group dd2a8f8f-9499-4211-bd29-a21fd7f46681 in Empty state. Created a new member id consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2-f6fd4c3e-b29d-4254-adbe-037697e1c482 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:28,558] INFO [GroupCoordinator 1]: Preparing to rebalance group dd2a8f8f-9499-4211-bd29-a21fd7f46681 in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2-f6fd4c3e-b29d-4254-adbe-037697e1c482 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:30,971] INFO [GroupCoordinator 1]: Stabilized group b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6 generation 1 (__consumer_offsets-13) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:30,977] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:31,010] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-36162ef8-e1db-4316-bf26-ea8170483c19 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:31,010] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6-3-6d738350-946d-4191-bc34-76a796a76349 for group b03b2677-3b04-4853-b0f5-3eb3a7e6ccd6 for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:31,560] INFO [GroupCoordinator 1]: Stabilized group dd2a8f8f-9499-4211-bd29-a21fd7f46681 generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:23 kafka | [2024-04-23 23:14:31,577] INFO [GroupCoordinator 1]: Assignment received from leader consumer-dd2a8f8f-9499-4211-bd29-a21fd7f46681-2-f6fd4c3e-b29d-4254-adbe-037697e1c482 for group dd2a8f8f-9499-4211-bd29-a21fd7f46681 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:23 ++ echo 'Tearing down containers...' 23:16:23 Tearing down containers... 23:16:23 ++ docker-compose down -v --remove-orphans 23:16:24 Stopping policy-apex-pdp ... 23:16:24 Stopping grafana ... 23:16:24 Stopping policy-pap ... 23:16:24 Stopping kafka ... 23:16:24 Stopping policy-api ... 23:16:24 Stopping mariadb ... 23:16:24 Stopping prometheus ... 23:16:24 Stopping zookeeper ... 23:16:24 Stopping simulator ... 23:16:25 Stopping grafana ... done 23:16:25 Stopping prometheus ... done 23:16:34 Stopping policy-apex-pdp ... done 23:16:45 Stopping policy-pap ... done 23:16:45 Stopping simulator ... done 23:16:45 Stopping mariadb ... done 23:16:46 Stopping kafka ... done 23:16:46 Stopping zookeeper ... done 23:16:55 Stopping policy-api ... done 23:16:55 Removing policy-apex-pdp ... 23:16:55 Removing grafana ... 23:16:55 Removing policy-pap ... 23:16:55 Removing kafka ... 23:16:55 Removing policy-api ... 23:16:55 Removing policy-db-migrator ... 23:16:55 Removing mariadb ... 23:16:55 Removing prometheus ... 23:16:55 Removing zookeeper ... 23:16:55 Removing simulator ... 23:16:55 Removing policy-api ... done 23:16:55 Removing grafana ... done 23:16:55 Removing prometheus ... done 23:16:55 Removing mariadb ... done 23:16:55 Removing policy-apex-pdp ... done 23:16:55 Removing policy-db-migrator ... done 23:16:55 Removing kafka ... done 23:16:55 Removing simulator ... done 23:16:55 Removing zookeeper ... done 23:16:55 Removing policy-pap ... 
done 23:16:55 Removing network compose_default 23:16:55 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:16:55 + load_set 23:16:55 + _setopts=hxB 23:16:55 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:16:55 ++ tr : ' ' 23:16:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:55 + set +o braceexpand 23:16:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:55 + set +o hashall 23:16:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:55 + set +o interactive-comments 23:16:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:55 + set +o xtrace 23:16:55 ++ echo hxB 23:16:55 ++ sed 's/./& /g' 23:16:55 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:55 + set +h 23:16:55 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:55 + set +x 23:16:55 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:16:55 + [[ -n /tmp/tmp.701UMRpgdq ]] 23:16:55 + rsync -av /tmp/tmp.701UMRpgdq/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:16:55 sending incremental file list 23:16:55 ./ 23:16:55 log.html 23:16:55 output.xml 23:16:55 report.html 23:16:55 testplan.txt 23:16:55 23:16:55 sent 919,538 bytes received 95 bytes 1,839,266.00 bytes/sec 23:16:55 total size is 918,992 speedup is 1.00 23:16:55 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:16:55 + exit 0 23:16:55 $ ssh-agent -k 23:16:55 unset SSH_AUTH_SOCK; 23:16:55 unset SSH_AGENT_PID; 23:16:55 echo Agent pid 2056 killed; 23:16:55 [ssh-agent] Stopped. 23:16:56 Robot results publisher started... 23:16:56 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:16:56 -Parsing output xml: 23:16:56 Done! 23:16:56 WARNING! Could not find file: **/log.html 23:16:56 WARNING! Could not find file: **/report.html 23:16:56 -Copying log files to build dir: 23:16:56 Done! 23:16:56 -Assigning results to build: 23:16:56 Done! 23:16:56 -Checking thresholds: 23:16:56 Done! 23:16:56 Done publishing Robot results. 23:16:56 [PostBuildScript] - [INFO] Executing post build scripts. 
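Note on the GroupCoordinator lines above: each consumer first joins with an unknown member id, the broker generates one and asks it to rejoin (the MemberIdRequiredException reason is the normal first-join handshake, not a failure), the group passes through PreparingRebalance, stabilizes at generation 1, and the leader's assignment is accepted. All three groups (policy-pap and the two UUID-named groups) complete this cycle within a few seconds. A minimal Python sketch that condenses those lines from the archived docker_compose.log into one timeline per group; the regexes are assumptions modeled on the message shapes above, and group_timeline.py is a hypothetical name:

import re
import sys
from collections import defaultdict

# Message shapes taken from the GroupCoordinator lines in this log; the
# regexes themselves are assumptions, not Kafka's documented format.
EVENTS = [
    ("join", re.compile(r"joins group (\S+) in Empty state")),
    ("rebalance", re.compile(r"Preparing to rebalance group (\S+) ")),
    ("stabilized", re.compile(r"Stabilized group (\S+) generation \d+")),
    ("assigned", re.compile(r"Assignment received from leader \S+ for group (\S+)")),
]

def timeline(path):
    groups = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "kafka.coordinator.group" not in line:
                continue
            for name, rx in EVENTS:
                m = rx.search(line)
                if m:
                    groups[m.group(1)].append(name)
                    break
    return groups

if __name__ == "__main__":
    # e.g. python3 group_timeline.py csit/archives/pap/docker_compose.log
    for group, events in sorted(timeline(sys.argv[1]).items()):
        print(group, "->", " -> ".join(events))

Against this run it would print something like policy-pap -> join -> rebalance -> stabilized -> assigned for each group.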
23:16:56 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12855354532872141111.sh 23:16:56 ---> sysstat.sh 23:16:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15874863738730500723.sh 23:16:57 ---> package-listing.sh 23:16:57 ++ facter osfamily 23:16:57 ++ tr '[:upper:]' '[:lower:]' 23:16:57 + OS_FAMILY=debian 23:16:57 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:16:57 + START_PACKAGES=/tmp/packages_start.txt 23:16:57 + END_PACKAGES=/tmp/packages_end.txt 23:16:57 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:16:57 + PACKAGES=/tmp/packages_start.txt 23:16:57 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:57 + PACKAGES=/tmp/packages_end.txt 23:16:57 + case "${OS_FAMILY}" in 23:16:57 + dpkg -l 23:16:57 + grep '^ii' 23:16:57 + '[' -f /tmp/packages_start.txt ']' 23:16:57 + '[' -f /tmp/packages_end.txt ']' 23:16:57 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:16:57 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:57 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:16:57 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:16:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14454343651408450861.sh 23:16:57 ---> capture-instance-metadata.sh 23:16:57 Setup pyenv: 23:16:57 system 23:16:57 3.8.13 23:16:57 3.9.13 23:16:57 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:16:57 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TV3n from file:/tmp/.os_lf_venv 23:16:58 lf-activate-venv(): INFO: Installing: lftools 23:17:09 lf-activate-venv(): INFO: Adding /tmp/venv-TV3n/bin to PATH 23:17:09 INFO: Running in OpenStack, capturing instance metadata 23:17:09 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins18214308995562534395.sh 23:17:09 provisioning config files... 23:17:09 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config1006715612320662601tmp 23:17:09 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:09 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:09 [EnvInject] - Injecting environment variables from a build step. 23:17:10 [EnvInject] - Injecting as environment variables the properties content 23:17:10 SERVER_ID=logs 23:17:10 23:17:10 [EnvInject] - Variables injected successfully. 23:17:10 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10726801019785958045.sh 23:17:10 ---> create-netrc.sh 23:17:10 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2614027787226563546.sh 23:17:10 ---> python-tools-install.sh 23:17:10 Setup pyenv: 23:17:10 system 23:17:10 3.8.13 23:17:10 3.9.13 23:17:10 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:10 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TV3n from file:/tmp/.os_lf_venv 23:17:11 lf-activate-venv(): INFO: Installing: lftools 23:17:19 lf-activate-venv(): INFO: Adding /tmp/venv-TV3n/bin to PATH 23:17:19 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16824317445047273344.sh 23:17:19 ---> sudo-logs.sh 23:17:19 Archiving 'sudo' log.. 
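The package-listing.sh step above snapshots the installed packages with dpkg -l filtered to '^ii' lines, diffs the end-of-job list against the start-of-job snapshot, and copies all three files into the workspace archives directory. A rough Python equivalent of that flow, reusing the /tmp paths from the trace; treat it as a sketch, since the job itself runs the shell pipeline shown:

import pathlib
import shutil
import subprocess

WORKSPACE = pathlib.Path("/w/workspace/policy-pap-master-project-csit-pap")
START, END, DIFF = (pathlib.Path("/tmp") / n for n in
                    ("packages_start.txt", "packages_end.txt", "packages_diff.txt"))

def snapshot(dest: pathlib.Path) -> None:
    # Equivalent of: dpkg -l | grep '^ii'
    out = subprocess.run(["dpkg", "-l"], capture_output=True,
                         text=True, check=True).stdout
    dest.write_text("\n".join(l for l in out.splitlines()
                              if l.startswith("ii")) + "\n")

snapshot(END)
if START.exists() and END.exists():
    # diff exits 1 when the files differ, so no check=True here
    diff = subprocess.run(["diff", str(START), str(END)],
                          capture_output=True, text=True)
    DIFF.write_text(diff.stdout)

archives = WORKSPACE / "archives"
archives.mkdir(parents=True, exist_ok=True)
for f in (DIFF, END, START):
    if f.exists():
        shutil.copy(f, archives)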
23:17:20 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7478789307168769287.sh 23:17:20 ---> job-cost.sh 23:17:20 Setup pyenv: 23:17:20 system 23:17:20 3.8.13 23:17:20 3.9.13 23:17:20 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TV3n from file:/tmp/.os_lf_venv 23:17:21 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 23:17:26 lf-activate-venv(): INFO: Adding /tmp/venv-TV3n/bin to PATH 23:17:26 INFO: No Stack... 23:17:26 INFO: Retrieving Pricing Info for: v3-standard-8 23:17:27 INFO: Archiving Costs 23:17:27 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins988194079073182633.sh 23:17:27 ---> logs-deploy.sh 23:17:27 Setup pyenv: 23:17:27 system 23:17:27 3.8.13 23:17:27 3.9.13 23:17:27 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:27 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-TV3n from file:/tmp/.os_lf_venv 23:17:28 lf-activate-venv(): INFO: Installing: lftools 23:17:37 lf-activate-venv(): INFO: Adding /tmp/venv-TV3n/bin to PATH 23:17:37 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1656 23:17:37 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 23:17:38 Archives upload complete. 23:17:38 INFO: archiving logs to Nexus 23:17:39 ---> uname -a: 23:17:39 Linux prd-ubuntu1804-docker-8c-8g-25416 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 23:17:39 23:17:39 23:17:39 ---> lscpu: 23:17:39 Architecture: x86_64 23:17:39 CPU op-mode(s): 32-bit, 64-bit 23:17:39 Byte Order: Little Endian 23:17:39 CPU(s): 8 23:17:39 On-line CPU(s) list: 0-7 23:17:39 Thread(s) per core: 1 23:17:39 Core(s) per socket: 1 23:17:39 Socket(s): 8 23:17:39 NUMA node(s): 1 23:17:39 Vendor ID: AuthenticAMD 23:17:39 CPU family: 23 23:17:39 Model: 49 23:17:39 Model name: AMD EPYC-Rome Processor 23:17:39 Stepping: 0 23:17:39 CPU MHz: 2799.998 23:17:39 BogoMIPS: 5599.99 23:17:39 Virtualization: AMD-V 23:17:39 Hypervisor vendor: KVM 23:17:39 Virtualization type: full 23:17:39 L1d cache: 32K 23:17:39 L1i cache: 32K 23:17:39 L2 cache: 512K 23:17:39 L3 cache: 16384K 23:17:39 NUMA node0 CPU(s): 0-7 23:17:39 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 23:17:39 23:17:39 23:17:39 ---> nproc: 23:17:39 8 23:17:39 23:17:39 23:17:39 ---> df -h: 23:17:39 Filesystem Size Used Avail Use% Mounted on 23:17:39 udev 16G 0 16G 0% /dev 23:17:39 tmpfs 3.2G 708K 3.2G 1% /run 23:17:39 /dev/vda1 155G 14G 142G 9% / 23:17:39 tmpfs 16G 0 16G 0% /dev/shm 23:17:39 tmpfs 5.0M 0 5.0M 0% /run/lock 23:17:39 tmpfs 16G 0 16G 0% /sys/fs/cgroup 23:17:39 /dev/vda15 105M 4.4M 100M 5% /boot/efi 23:17:39 tmpfs 3.2G 0 3.2G 0% /run/user/1001 23:17:39 23:17:39 23:17:39 ---> free -m: 23:17:39 total used free shared buff/cache available 23:17:39 Mem: 
32167 849 25164 0 6153 30861 23:17:39 Swap: 1023 0 1023 23:17:39 23:17:39 23:17:39 ---> ip addr: 23:17:39 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 23:17:39 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 23:17:39 inet 127.0.0.1/8 scope host lo 23:17:39 valid_lft forever preferred_lft forever 23:17:39 inet6 ::1/128 scope host 23:17:39 valid_lft forever preferred_lft forever 23:17:39 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000 23:17:39 link/ether fa:16:3e:86:ba:21 brd ff:ff:ff:ff:ff:ff 23:17:39 inet 10.30.107.55/23 brd 10.30.107.255 scope global dynamic ens3 23:17:39 valid_lft 85962sec preferred_lft 85962sec 23:17:39 inet6 fe80::f816:3eff:fe86:ba21/64 scope link 23:17:39 valid_lft forever preferred_lft forever 23:17:39 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 23:17:39 link/ether 02:42:ce:21:1a:8b brd ff:ff:ff:ff:ff:ff 23:17:39 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 23:17:39 valid_lft forever preferred_lft forever 23:17:39 23:17:39 23:17:39 ---> sar -b -r -n DEV: 23:17:39 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25416) 04/23/24 _x86_64_ (8 CPU) 23:17:39 23:17:39 23:10:23 LINUX RESTART (8 CPU) 23:17:39 23:17:39 23:11:01 tps rtps wtps bread/s bwrtn/s 23:17:39 23:12:01 103.70 17.73 85.97 1021.83 29653.86 23:17:39 23:13:01 152.41 23.21 129.20 2791.67 38313.88 23:17:39 23:14:01 510.73 12.68 498.05 762.31 153688.27 23:17:39 23:15:01 37.48 0.45 37.03 37.19 26883.39 23:17:39 23:16:01 18.89 0.93 17.96 19.59 22926.69 23:17:39 23:17:01 66.72 0.90 65.82 52.92 25124.51 23:17:39 Average: 148.32 9.32 139.00 780.90 49431.03 23:17:39 23:17:39 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 23:17:39 23:12:01 30077148 31689524 2862072 8.69 73124 1847804 1436476 4.23 888244 1678012 182828 23:17:39 23:13:01 27882068 31671156 5057152 15.35 114224 3907268 1404412 4.13 988288 3645540 1822628 23:17:39 23:14:01 24999328 30873944 7939892 24.10 155364 5831604 7141860 21.01 1909520 5433452 1320 23:17:39 23:15:01 23583316 29571544 9355904 28.40 157128 5940220 8796608 25.88 3293568 5454064 452 23:17:39 23:16:01 23567136 29557204 9372084 28.45 157312 5941380 8793800 25.87 3321052 5441820 420 23:17:39 23:17:01 25774024 31600224 7165196 21.75 158384 5793588 1507992 4.44 1322164 5297436 4412 23:17:39 Average: 25980503 30827266 6958717 21.13 135923 4876977 4846858 14.26 1953806 4491721 335343 23:17:39 23:17:39 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 23:17:39 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:39 23:12:01 ens3 61.44 42.53 976.02 8.07 0.00 0.00 0.00 0.00 23:17:39 23:12:01 lo 1.67 1.67 0.19 0.19 0.00 0.00 0.00 0.00 23:17:39 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:39 23:13:01 ens3 516.36 279.84 13049.01 24.51 0.00 0.00 0.00 0.00 23:17:39 23:13:01 lo 9.27 9.27 0.91 0.91 0.00 0.00 0.00 0.00 23:17:39 23:13:01 br-05be8ff8bad5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:39 23:14:01 veth38b1e14 36.96 29.26 4.37 2.66 0.00 0.00 0.00 0.00 23:17:39 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:39 23:14:01 vethf07f00e 1.68 1.73 0.17 0.18 0.00 0.00 0.00 0.00 23:17:39 23:14:01 ens3 751.81 313.73 18193.21 22.07 0.00 0.00 0.00 0.00 23:17:39 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:39 23:15:01 vethf07f00e 15.28 13.15 1.95 1.98 0.00 0.00 0.00 0.00 23:17:39 23:15:01 ens3 4.42 3.90 0.88 1.08 0.00 0.00 0.00 0.00 23:17:39 23:15:01 veth8c3e035 0.58 0.92 0.06 0.32 0.00
23:17:39 ---> sar -b -r -n DEV:
23:17:39 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25416) 04/23/24 _x86_64_ (8 CPU)
23:17:39
23:17:39 23:10:23 LINUX RESTART (8 CPU)
23:17:39
23:17:39 23:11:01 tps rtps wtps bread/s bwrtn/s
23:17:39 23:12:01 103.70 17.73 85.97 1021.83 29653.86
23:17:39 23:13:01 152.41 23.21 129.20 2791.67 38313.88
23:17:39 23:14:01 510.73 12.68 498.05 762.31 153688.27
23:17:39 23:15:01 37.48 0.45 37.03 37.19 26883.39
23:17:39 23:16:01 18.89 0.93 17.96 19.59 22926.69
23:17:39 23:17:01 66.72 0.90 65.82 52.92 25124.51
23:17:39 Average: 148.32 9.32 139.00 780.90 49431.03
23:17:39
23:17:39 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:17:39 23:12:01 30077148 31689524 2862072 8.69 73124 1847804 1436476 4.23 888244 1678012 182828
23:17:39 23:13:01 27882068 31671156 5057152 15.35 114224 3907268 1404412 4.13 988288 3645540 1822628
23:17:39 23:14:01 24999328 30873944 7939892 24.10 155364 5831604 7141860 21.01 1909520 5433452 1320
23:17:39 23:15:01 23583316 29571544 9355904 28.40 157128 5940220 8796608 25.88 3293568 5454064 452
23:17:39 23:16:01 23567136 29557204 9372084 28.45 157312 5941380 8793800 25.87 3321052 5441820 420
23:17:39 23:17:01 25774024 31600224 7165196 21.75 158384 5793588 1507992 4.44 1322164 5297436 4412
23:17:39 Average: 25980503 30827266 6958717 21.13 135923 4876977 4846858 14.26 1953806 4491721 335343
23:17:39
23:17:39 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:17:39 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 23:12:01 ens3 61.44 42.53 976.02 8.07 0.00 0.00 0.00 0.00
23:17:39 23:12:01 lo 1.67 1.67 0.19 0.19 0.00 0.00 0.00 0.00
23:17:39 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 23:13:01 ens3 516.36 279.84 13049.01 24.51 0.00 0.00 0.00 0.00
23:17:39 23:13:01 lo 9.27 9.27 0.91 0.91 0.00 0.00 0.00 0.00
23:17:39 23:13:01 br-05be8ff8bad5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 23:14:01 veth38b1e14 36.96 29.26 4.37 2.66 0.00 0.00 0.00 0.00
23:17:39 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 23:14:01 vethf07f00e 1.68 1.73 0.17 0.18 0.00 0.00 0.00 0.00
23:17:39 23:14:01 ens3 751.81 313.73 18193.21 22.07 0.00 0.00 0.00 0.00
23:17:39 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 23:15:01 vethf07f00e 15.28 13.15 1.95 1.98 0.00 0.00 0.00 0.00
23:17:39 23:15:01 ens3 4.42 3.90 0.88 1.08 0.00 0.00 0.00 0.00
23:17:39 23:15:01 veth8c3e035 0.58 0.92 0.06 0.32 0.00 0.00 0.00 0.00
23:17:39 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 23:16:01 vethf07f00e 13.83 9.33 1.05 1.34 0.00 0.00 0.00 0.00
23:17:39 23:16:01 ens3 1.53 1.40 0.37 0.47 0.00 0.00 0.00 0.00
23:17:39 23:16:01 veth8c3e035 0.20 0.12 0.01 0.01 0.00 0.00 0.00 0.00
23:17:39 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 23:17:01 ens3 64.34 44.63 65.25 29.07 0.00 0.00 0.00 0.00
23:17:39 23:17:01 lo 34.86 34.86 6.21 6.21 0.00 0.00 0.00 0.00
23:17:39 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:39 Average: ens3 233.31 114.33 5380.64 14.21 0.00 0.00 0.00 0.00
23:17:39 Average: lo 5.15 5.15 0.98 0.98 0.00 0.00 0.00 0.00
23:17:39
23:17:39
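[editor's note] In the interface table above, ens3 (the build VM's primary interface) dominates with an average of 5380.64 rxkB/s, while docker0 itself stays at zero; the container traffic shows up on the veth interfaces instead. A minimal sketch for ranking interfaces from such a capture, with column positions taken from the IFACE header row above (the function name is illustrative):

#!/usr/bin/env python3
# Minimal sketch: rank interfaces by average received kB/s from a captured
# `sar -n DEV` report like the one above. Column positions follow the
# header row shown in the log (IFACE rxpck/s txpck/s rxkB/s txkB/s ...).
import sys

def busiest_interfaces(sar_text: str) -> list[tuple[str, float]]:
    rows = []
    for line in sar_text.splitlines():
        parts = line.split()
        if "Average:" not in parts:
            continue
        fields = parts[parts.index("Average:") + 1:]
        # DEV rows carry an interface name; skip the CPU rows ("all", "0".."7")
        # and the purely numeric averages from the other sar tables.
        if len(fields) >= 4 and fields[0] != "all" and not fields[0].replace(".", "").isdigit():
            rows.append((fields[0], float(fields[3])))  # fields[3] = rxkB/s
    return sorted(rows, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    for iface, rxkb in busiest_interfaces(sys.stdin.read()):
        print(f"{iface:20s} {rxkb:10.2f} rxkB/s")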
23:17:39 ---> sar -P ALL:
23:17:39 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25416) 04/23/24 _x86_64_ (8 CPU)
23:17:39
23:17:39 23:10:23 LINUX RESTART (8 CPU)
23:17:39
23:17:39 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:17:39 23:12:01 all 10.28 0.00 0.71 1.81 0.03 87.16
23:17:39 23:12:01 0 9.73 0.00 0.52 0.35 0.02 89.39
23:17:39 23:12:01 1 35.75 0.00 2.09 1.23 0.07 60.86
23:17:39 23:12:01 2 3.12 0.00 0.25 0.30 0.07 96.26
23:17:39 23:12:01 3 13.86 0.00 0.68 0.07 0.02 85.37
23:17:39 23:12:01 4 1.47 0.00 0.32 0.08 0.00 98.13
23:17:39 23:12:01 5 7.92 0.00 0.75 0.75 0.02 90.56
23:17:39 23:12:01 6 2.44 0.00 0.52 11.56 0.05 85.43
23:17:39 23:12:01 7 7.97 0.00 0.52 0.18 0.02 91.31
23:17:39 23:13:01 all 12.64 0.00 3.05 1.46 0.05 82.80
23:17:39 23:13:01 0 13.27 0.00 2.88 0.89 0.05 82.92
23:17:39 23:13:01 1 14.70 0.00 3.13 3.60 0.05 78.51
23:17:39 23:13:01 2 20.84 0.00 2.83 0.39 0.07 75.88
23:17:39 23:13:01 3 12.27 0.00 3.36 1.11 0.05 83.21
23:17:39 23:13:01 4 17.33 0.00 3.56 0.30 0.05 78.75
23:17:39 23:13:01 5 7.85 0.00 2.88 0.13 0.03 89.11
23:17:39 23:13:01 6 8.40 0.00 2.76 3.34 0.03 85.47
23:17:39 23:13:01 7 6.45 0.00 3.00 1.87 0.08 88.60
23:17:39 23:14:01 all 14.35 0.00 5.42 7.70 0.07 72.47
23:17:39 23:14:01 0 13.95 0.00 5.69 1.47 0.07 78.81
23:17:39 23:14:01 1 14.96 0.00 5.10 15.18 0.08 64.67
23:17:39 23:14:01 2 13.78 0.00 4.87 3.24 0.07 78.04
23:17:39 23:14:01 3 14.84 0.00 5.62 1.02 0.05 78.48
23:17:39 23:14:01 4 15.87 0.00 5.41 1.90 0.07 76.75
23:17:39 23:14:01 5 13.60 0.00 5.91 2.04 0.07 78.39
23:17:39 23:14:01 6 12.46 0.00 5.75 27.88 0.10 53.81
23:17:39 23:14:01 7 15.30 0.00 4.97 8.98 0.05 70.69
23:17:39 23:15:01 all 25.95 0.00 2.49 0.78 0.08 70.70
23:17:39 23:15:01 0 34.47 0.00 3.44 0.02 0.08 61.99
23:17:39 23:15:01 1 21.44 0.00 2.31 0.74 0.07 75.44
23:17:39 23:15:01 2 23.39 0.00 1.64 0.00 0.08 74.89
23:17:39 23:15:01 3 32.58 0.00 2.82 0.02 0.08 64.50
23:17:39 23:15:01 4 30.48 0.00 3.01 0.02 0.08 66.40
23:17:39 23:15:01 5 25.95 0.00 3.26 5.39 0.07 65.33
23:17:39 23:15:01 6 20.28 0.00 1.79 0.02 0.07 77.84
23:17:39 23:15:01 7 19.02 0.00 1.74 0.03 0.07 79.14
23:17:39 23:16:01 all 1.09 0.00 0.17 0.82 0.05 97.88
23:17:39 23:16:01 0 1.02 0.00 0.22 0.02 0.07 98.68
23:17:39 23:16:01 1 0.94 0.00 0.28 0.03 0.07 98.68
23:17:39 23:16:01 2 0.59 0.00 0.13 0.07 0.03 99.18
23:17:39 23:16:01 3 1.03 0.00 0.12 0.00 0.03 98.82
23:17:39 23:16:01 4 1.18 0.00 0.12 0.05 0.03 98.62
23:17:39 23:16:01 5 1.44 0.00 0.12 6.31 0.05 92.09
23:17:39 23:16:01 6 1.37 0.00 0.12 0.00 0.05 98.46
23:17:39 23:16:01 7 1.13 0.00 0.22 0.05 0.07 98.53
23:17:39 23:17:01 all 2.58 0.00 0.61 1.03 0.05 95.74
23:17:39 23:17:01 0 2.02 0.00 0.68 0.02 0.05 97.23
23:17:39 23:17:01 1 6.54 0.00 0.54 0.13 0.03 92.76
23:17:39 23:17:01 2 1.88 0.00 0.74 0.18 0.05 97.15
23:17:39 23:17:01 3 1.89 0.00 0.47 0.17 0.05 97.42
23:17:39 23:17:01 4 3.41 0.00 0.64 0.12 0.07 95.77
23:17:39 23:17:01 5 1.50 0.00 0.52 7.12 0.03 90.83
23:17:39 23:17:01 6 1.77 0.00 0.47 0.30 0.03 97.42
23:17:39 23:17:01 7 1.60 0.00 0.78 0.17 0.05 97.40
23:17:39 Average: all 11.13 0.00 2.07 2.26 0.05 84.49
23:17:39 Average: 0 12.39 0.00 2.23 0.46 0.06 84.87
23:17:39 Average: 1 15.72 0.00 2.24 3.47 0.06 78.51
23:17:39 Average: 2 10.58 0.00 1.74 0.70 0.06 86.92
23:17:39 Average: 3 12.73 0.00 2.17 0.39 0.05 84.66
23:17:39 Average: 4 11.59 0.00 2.16 0.41 0.05 85.78
23:17:39 Average: 5 9.69 0.00 2.23 3.63 0.04 84.40
23:17:39 Average: 6 7.77 0.00 1.89 7.13 0.06 83.16
23:17:39 Average: 7 8.56 0.00 1.86 1.87 0.06 87.65
23:17:39
23:17:39
23:17:39
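[editor's note] The per-CPU table tells the same story as the I/O table: %iowait peaks at 27.88 on CPU 6 at 23:14:01, the interval with the heaviest disk writes (bwrtn/s 153688.27), and the host is largely idle after 23:16:01. A sketch that flags such iowait spikes in a capture like this one (threshold and function name are illustrative):

#!/usr/bin/env python3
# Sketch: flag high-%iowait samples in a `sar -P ALL` capture like the one
# above. On this log it would flag CPU 6 at 23:14:01 (27.88% iowait).
import sys

IOWAIT_THRESHOLD = 10.0  # percent; arbitrary illustration value

def iowait_spikes(sar_text: str, threshold: float = IOWAIT_THRESHOLD):
    spikes = []
    for line in sar_text.splitlines():
        parts = line.split()
        if len(parts) < 8:
            continue
        try:
            user, nice, system, iowait, steal, idle = map(float, parts[-6:])
        except ValueError:
            continue  # not a row ending in six numeric columns
        # CPU utilization rows are the only ones whose last six columns
        # (%user %nice %system %iowait %steal %idle) sum to roughly 100.
        if abs(user + nice + system + iowait + steal + idle - 100.0) > 1.0:
            continue
        if iowait > threshold:
            spikes.append((parts[-8], parts[-7], iowait))  # (time, cpu, %iowait)
    return spikes

if __name__ == "__main__":
    for ts, cpu, iowait in iowait_spikes(sys.stdin.read()):
        print(f"{ts} CPU {cpu}: %iowait = {iowait:.2f}")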