12:58:20 Started by upstream project "policy-docker-master-merge-java" build number 356
12:58:20 originally caused by:
12:58:20 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137830
12:58:20 Running as SYSTEM
12:58:20 [EnvInject] - Loading node environment variables.
12:58:20 Building remotely on prd-ubuntu1804-docker-8c-8g-36937 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
12:58:20 [ssh-agent] Looking for ssh-agent implementation...
12:58:20 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
12:58:20 $ ssh-agent
12:58:20 SSH_AUTH_SOCK=/tmp/ssh-g8UDEpvNBsO0/agent.2104
12:58:20 SSH_AGENT_PID=2106
12:58:20 [ssh-agent] Started.
12:58:20 Running ssh-add (command line suppressed)
12:58:20 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13194706609034643159.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_13194706609034643159.key)
12:58:20 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
12:58:20 The recommended git tool is: NONE
12:58:22 using credential onap-jenkins-ssh
12:58:22 Wiping out workspace first.
12:58:22 Cloning the remote Git repository
12:58:22 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
12:58:22 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
12:58:22 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
12:58:22 > git --version # timeout=10
12:58:22 > git --version # 'git version 2.17.1'
12:58:22 using GIT_SSH to set credentials Gerrit user
12:58:22 Verifying host key using manually-configured host key entries
12:58:22 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
12:58:22 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
12:58:22 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
12:58:23 Avoid second fetch
12:58:23 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
12:58:23 Checking out Revision f2e84f8528911e533079d1048d2a7ab2c94826b6 (refs/remotes/origin/master)
12:58:23 > git config core.sparsecheckout # timeout=10
12:58:23 > git checkout -f f2e84f8528911e533079d1048d2a7ab2c94826b6 # timeout=30
12:58:23 Commit message: "Fix Postgres queries in clamp database migration"
12:58:23 > git rev-list --no-walk 8fadfb9667186910af1b9b6c31b9bb673057f729 # timeout=10
12:58:23 provisioning config files...
12:58:23 copy managed file [npmrc] to file:/home/jenkins/.npmrc
12:58:23 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
12:58:23 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4261784556497903012.sh
12:58:23 ---> python-tools-install.sh
12:58:23 Setup pyenv:
12:58:23 * system (set by /opt/pyenv/version)
12:58:23 * 3.8.13 (set by /opt/pyenv/version)
12:58:23 * 3.9.13 (set by /opt/pyenv/version)
12:58:23 * 3.10.6 (set by /opt/pyenv/version)
12:58:28 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-OD2Q
12:58:28 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
12:58:31 lf-activate-venv(): INFO: Installing: lftools
12:59:05 lf-activate-venv(): INFO: Adding /tmp/venv-OD2Q/bin to PATH
12:59:05 Generating Requirements File
12:59:33 Python 3.10.6
12:59:33 pip 24.0 from /tmp/venv-OD2Q/lib/python3.10/site-packages/pip (python 3.10)
12:59:34 appdirs==1.4.4
12:59:34 argcomplete==3.3.0
12:59:34 aspy.yaml==1.3.0
12:59:34 attrs==23.2.0
12:59:34 autopage==0.5.2
12:59:34 beautifulsoup4==4.12.3
12:59:34 boto3==1.34.96
12:59:34 botocore==1.34.96
12:59:34 bs4==0.0.2
12:59:34 cachetools==5.3.3
12:59:34 certifi==2024.2.2
12:59:34 cffi==1.16.0
12:59:34 cfgv==3.4.0
12:59:34 chardet==5.2.0
12:59:34 charset-normalizer==3.3.2
12:59:34 click==8.1.7
12:59:34 cliff==4.6.0
12:59:34 cmd2==2.4.3
12:59:34 cryptography==3.3.2
12:59:34 debtcollector==3.0.0
12:59:34 decorator==5.1.1
12:59:34 defusedxml==0.7.1
12:59:34 Deprecated==1.2.14
12:59:34 distlib==0.3.8
12:59:34 dnspython==2.6.1
12:59:34 docker==4.2.2
12:59:34 dogpile.cache==1.3.2
12:59:34 email_validator==2.1.1
12:59:34 filelock==3.14.0
12:59:34 future==1.0.0
12:59:34 gitdb==4.0.11
12:59:34 GitPython==3.1.43
12:59:34 google-auth==2.29.0
12:59:34 httplib2==0.22.0
12:59:34 identify==2.5.36
12:59:34 idna==3.7
12:59:34 importlib-resources==1.5.0
12:59:34 iso8601==2.1.0
12:59:34 Jinja2==3.1.3
12:59:34 jmespath==1.0.1
12:59:34 jsonpatch==1.33
12:59:34 jsonpointer==2.4
12:59:34 jsonschema==4.22.0
12:59:34 jsonschema-specifications==2023.12.1
12:59:34 keystoneauth1==5.6.0
12:59:34 kubernetes==29.0.0
12:59:34 lftools==0.37.10
12:59:34 lxml==5.2.1
12:59:34 MarkupSafe==2.1.5
12:59:34 msgpack==1.0.8
12:59:34 multi_key_dict==2.0.3
12:59:34 munch==4.0.0
12:59:34 netaddr==1.2.1
12:59:34 netifaces==0.11.0
12:59:34 niet==1.4.2
12:59:34 nodeenv==1.8.0
12:59:34 oauth2client==4.1.3
12:59:34 oauthlib==3.2.2
12:59:34 openstacksdk==3.1.0
12:59:34 os-client-config==2.1.0
12:59:34 os-service-types==1.7.0
12:59:34 osc-lib==3.0.1
12:59:34 oslo.config==9.4.0
12:59:34 oslo.context==5.5.0
12:59:34 oslo.i18n==6.3.0
12:59:34 oslo.log==5.5.1
12:59:34 oslo.serialization==5.4.0
12:59:34 oslo.utils==7.1.0
12:59:34 packaging==24.0
12:59:34 pbr==6.0.0
12:59:34 platformdirs==4.2.1
12:59:34 prettytable==3.10.0
12:59:34 pyasn1==0.6.0
12:59:34 pyasn1_modules==0.4.0
12:59:34 pycparser==2.22
12:59:34 pygerrit2==2.0.15
12:59:34 PyGithub==2.3.0
12:59:34 pyinotify==0.9.6
12:59:34 PyJWT==2.8.0
12:59:34 PyNaCl==1.5.0
12:59:34 pyparsing==2.4.7
12:59:34 pyperclip==1.8.2
12:59:34 pyrsistent==0.20.0
12:59:34 python-cinderclient==9.5.0
12:59:34 python-dateutil==2.9.0.post0
12:59:34 python-heatclient==3.5.0
12:59:34 python-jenkins==1.8.2
12:59:34 python-keystoneclient==5.4.0
12:59:34 python-magnumclient==4.4.0
12:59:34 python-novaclient==18.6.0
12:59:34 python-openstackclient==6.6.0
12:59:34 python-swiftclient==4.5.0
12:59:34 PyYAML==6.0.1
12:59:34 referencing==0.35.1
12:59:34 requests==2.31.0
12:59:34 requests-oauthlib==2.0.0
12:59:34 requestsexceptions==1.4.0
12:59:34 rfc3986==2.0.0
12:59:34 rpds-py==0.18.0
12:59:34 rsa==4.9
12:59:34 ruamel.yaml==0.18.6
12:59:34 ruamel.yaml.clib==0.2.8
12:59:34 s3transfer==0.10.1
12:59:34 simplejson==3.19.2
12:59:34 six==1.16.0
12:59:34 smmap==5.0.1
12:59:34 soupsieve==2.5
12:59:34 stevedore==5.2.0
12:59:34 tabulate==0.9.0
12:59:34 toml==0.10.2
12:59:34 tomlkit==0.12.4
12:59:34 tqdm==4.66.2
12:59:34 typing_extensions==4.11.0
12:59:34 tzdata==2024.1
12:59:34 urllib3==1.26.18
12:59:34 virtualenv==20.26.1
12:59:34 wcwidth==0.2.13
12:59:34 websocket-client==1.8.0
12:59:34 wrapt==1.16.0
12:59:34 xdg==6.0.0
12:59:34 xmltodict==0.13.0
12:59:34 yq==3.4.3
12:59:34 [EnvInject] - Injecting environment variables from a build step.
12:59:34 [EnvInject] - Injecting as environment variables the properties content
12:59:34 SET_JDK_VERSION=openjdk17
12:59:34 GIT_URL="git://cloud.onap.org/mirror"
12:59:34
12:59:34 [EnvInject] - Variables injected successfully.
12:59:34 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins16097004004911539115.sh
12:59:34 ---> update-java-alternatives.sh
12:59:34 ---> Updating Java version
12:59:34 ---> Ubuntu/Debian system detected
12:59:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
12:59:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
12:59:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
12:59:34 openjdk version "17.0.4" 2022-07-19
12:59:34 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
12:59:34 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
12:59:34 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
12:59:34 [EnvInject] - Injecting environment variables from a build step.
12:59:34 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
12:59:34 [EnvInject] - Variables injected successfully.
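The lf-activate-venv() steps traced above follow a common pattern: create a scratch virtualenv, install tooling into it, and prepend its bin/ directory to PATH so later build steps pick up those tools. A minimal sketch of that pattern, assuming nothing beyond the Python standard library (the directory name is illustrative, and the lftools install is skipped to keep the sketch offline; this is not the exact lf-tools implementation):

```shell
#!/bin/sh
# Sketch of the venv bootstrap pattern used by lf-activate-venv() above.
# --without-pip keeps this sketch offline-friendly; the real job bootstraps
# pip inside the venv and then installs lftools into it.
set -e
VENV_DIR="$(mktemp -d)/venv"            # e.g. /tmp/venv-OD2Q in the log
python3 -m venv --without-pip "$VENV_DIR"
export PATH="$VENV_DIR/bin:$PATH"       # later steps now resolve the venv's tools first
command -v python3                      # resolves inside $VENV_DIR/bin
```

Saving the venv path to a marker file (/tmp/.os_lf_venv in the log) is what lets subsequent shell steps of the same build re-activate the venv instead of recreating it.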
12:59:34 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins7088391735528245037.sh
12:59:34 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
12:59:34 + set +u
12:59:34 + save_set
12:59:34 + RUN_CSIT_SAVE_SET=ehxB
12:59:34 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
12:59:34 + '[' 1 -eq 0 ']'
12:59:34 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:59:34 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:59:34 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:59:34 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
12:59:34 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
12:59:34 + export ROBOT_VARIABLES=
12:59:34 + ROBOT_VARIABLES=
12:59:34 + export PROJECT=pap
12:59:34 + PROJECT=pap
12:59:34 + cd /w/workspace/policy-pap-master-project-csit-pap
12:59:34 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
12:59:34 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
12:59:34 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
12:59:34 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
12:59:34 + relax_set
12:59:34 + set +e
12:59:34 + set +o pipefail
12:59:34 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
12:59:34 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:59:34 +++ mktemp -d
12:59:34 ++ ROBOT_VENV=/tmp/tmp.IujCUsN8hu
12:59:34 ++ echo ROBOT_VENV=/tmp/tmp.IujCUsN8hu
12:59:34 +++ python3 --version
12:59:34 ++ echo 'Python version is: Python 3.6.9'
12:59:34 Python version is: Python 3.6.9
12:59:34 ++ python3 -m venv --clear /tmp/tmp.IujCUsN8hu
12:59:36 ++ source /tmp/tmp.IujCUsN8hu/bin/activate
12:59:36 +++ deactivate nondestructive
12:59:36 +++ '[' -n '' ']'
12:59:36 +++ '[' -n '' ']'
12:59:36 +++ '[' -n /bin/bash -o -n '' ']'
12:59:36 +++ hash -r
12:59:36 +++ '[' -n '' ']'
12:59:36 +++ unset VIRTUAL_ENV
12:59:36 +++ '[' '!' nondestructive = nondestructive ']'
12:59:36 +++ VIRTUAL_ENV=/tmp/tmp.IujCUsN8hu
12:59:36 +++ export VIRTUAL_ENV
12:59:36 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:59:36 +++ PATH=/tmp/tmp.IujCUsN8hu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:59:36 +++ export PATH
12:59:36 +++ '[' -n '' ']'
12:59:36 +++ '[' -z '' ']'
12:59:36 +++ _OLD_VIRTUAL_PS1=
12:59:36 +++ '[' 'x(tmp.IujCUsN8hu) ' '!=' x ']'
12:59:36 +++ PS1='(tmp.IujCUsN8hu) '
12:59:36 +++ export PS1
12:59:36 +++ '[' -n /bin/bash -o -n '' ']'
12:59:36 +++ hash -r
12:59:36 ++ set -exu
12:59:36 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
12:59:39 ++ echo 'Installing Python Requirements'
12:59:39 Installing Python Requirements
12:59:39 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
12:59:58 ++ python3 -m pip -qq freeze
12:59:58 bcrypt==4.0.1
12:59:58 beautifulsoup4==4.12.3
12:59:58 bitarray==2.9.2
12:59:58 certifi==2024.2.2
12:59:58 cffi==1.15.1
12:59:58 charset-normalizer==2.0.12
12:59:58 cryptography==40.0.2
12:59:58 decorator==5.1.1
12:59:58 elasticsearch==7.17.9
12:59:58 elasticsearch-dsl==7.4.1
12:59:58 enum34==1.1.10
12:59:58 idna==3.7
12:59:58 importlib-resources==5.4.0
12:59:58 ipaddr==2.2.0
12:59:58 isodate==0.6.1
12:59:58 jmespath==0.10.0
12:59:58 jsonpatch==1.32
12:59:58 jsonpath-rw==1.4.0
12:59:58 jsonpointer==2.3
12:59:58 lxml==5.2.1
12:59:58 netaddr==0.8.0
12:59:58 netifaces==0.11.0
12:59:58 odltools==0.1.28
12:59:58 paramiko==3.4.0
12:59:58 pkg_resources==0.0.0
12:59:58 ply==3.11
12:59:58 pyang==2.6.0
12:59:58 pyangbind==0.8.1
12:59:58 pycparser==2.21
12:59:58 pyhocon==0.3.60
12:59:58 PyNaCl==1.5.0
12:59:58 pyparsing==3.1.2
12:59:58 python-dateutil==2.9.0.post0
12:59:58 regex==2023.8.8
12:59:58 requests==2.27.1
12:59:58 robotframework==6.1.1
12:59:58 robotframework-httplibrary==0.4.2
12:59:58 robotframework-pythonlibcore==3.0.0
12:59:58 robotframework-requests==0.9.4
12:59:58 robotframework-selenium2library==3.0.0
12:59:58 robotframework-seleniumlibrary==5.1.3
12:59:58 robotframework-sshlibrary==3.8.0
12:59:58 scapy==2.5.0
12:59:58 scp==0.14.5
12:59:58 selenium==3.141.0
12:59:58 six==1.16.0
12:59:58 soupsieve==2.3.2.post1
12:59:58 urllib3==1.26.18
12:59:58 waitress==2.0.0
12:59:58 WebOb==1.8.7
12:59:58 WebTest==3.0.0
12:59:58 zipp==3.6.0
12:59:58 ++ mkdir -p /tmp/tmp.IujCUsN8hu/src/onap
12:59:58 ++ rm -rf /tmp/tmp.IujCUsN8hu/src/onap/testsuite
12:59:58 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
13:00:04 ++ echo 'Installing python confluent-kafka library'
13:00:04 Installing python confluent-kafka library
13:00:04 ++ python3 -m pip install -qq confluent-kafka
13:00:05 ++ echo 'Uninstall docker-py and reinstall docker.'
13:00:05 Uninstall docker-py and reinstall docker.
13:00:05 ++ python3 -m pip uninstall -y -qq docker
13:00:06 ++ python3 -m pip install -U -qq docker
13:00:07 ++ python3 -m pip -qq freeze
13:00:07 bcrypt==4.0.1
13:00:07 beautifulsoup4==4.12.3
13:00:07 bitarray==2.9.2
13:00:07 certifi==2024.2.2
13:00:07 cffi==1.15.1
13:00:07 charset-normalizer==2.0.12
13:00:07 confluent-kafka==2.3.0
13:00:07 cryptography==40.0.2
13:00:07 decorator==5.1.1
13:00:07 deepdiff==5.7.0
13:00:07 dnspython==2.2.1
13:00:07 docker==5.0.3
13:00:07 elasticsearch==7.17.9
13:00:07 elasticsearch-dsl==7.4.1
13:00:07 enum34==1.1.10
13:00:07 future==1.0.0
13:00:07 idna==3.7
13:00:07 importlib-resources==5.4.0
13:00:07 ipaddr==2.2.0
13:00:07 isodate==0.6.1
13:00:07 Jinja2==3.0.3
13:00:07 jmespath==0.10.0
13:00:07 jsonpatch==1.32
13:00:07 jsonpath-rw==1.4.0
13:00:07 jsonpointer==2.3
13:00:07 kafka-python==2.0.2
13:00:07 lxml==5.2.1
13:00:07 MarkupSafe==2.0.1
13:00:07 more-itertools==5.0.0
13:00:07 netaddr==0.8.0
13:00:07 netifaces==0.11.0
13:00:07 odltools==0.1.28
13:00:07 ordered-set==4.0.2
13:00:07 paramiko==3.4.0
13:00:07 pbr==6.0.0
13:00:07 pkg_resources==0.0.0
13:00:07 ply==3.11
13:00:07 protobuf==3.19.6
13:00:07 pyang==2.6.0
13:00:07 pyangbind==0.8.1
13:00:07 pycparser==2.21
13:00:07 pyhocon==0.3.60
13:00:07 PyNaCl==1.5.0
13:00:07 pyparsing==3.1.2
13:00:07 python-dateutil==2.9.0.post0
13:00:07 PyYAML==6.0.1
13:00:07 regex==2023.8.8
13:00:07 requests==2.27.1
13:00:07 robotframework==6.1.1
13:00:07 robotframework-httplibrary==0.4.2
13:00:07 robotframework-onap==0.6.0.dev105
13:00:07 robotframework-pythonlibcore==3.0.0
13:00:07 robotframework-requests==0.9.4
13:00:07 robotframework-selenium2library==3.0.0
13:00:07 robotframework-seleniumlibrary==5.1.3
13:00:07 robotframework-sshlibrary==3.8.0
13:00:07 robotlibcore-temp==1.0.2
13:00:07 scapy==2.5.0
13:00:07 scp==0.14.5
13:00:07 selenium==3.141.0
13:00:07 six==1.16.0
13:00:07 soupsieve==2.3.2.post1
13:00:07 urllib3==1.26.18
13:00:07 waitress==2.0.0
13:00:07 WebOb==1.8.7
13:00:07 websocket-client==1.3.1
13:00:07 WebTest==3.0.0
13:00:07 zipp==3.6.0
13:00:07 ++ uname
13:00:07 ++ grep -q Linux
13:00:07 ++ sudo apt-get -y -qq install libxml2-utils
13:00:07 + load_set
13:00:07 + _setopts=ehuxB
13:00:07 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
13:00:07 ++ tr : ' '
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o braceexpand
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o hashall
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o interactive-comments
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o nounset
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o xtrace
13:00:07 ++ echo ehuxB
13:00:07 ++ sed 's/./& /g'
13:00:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:00:07 + set +e
13:00:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:00:07 + set +h
13:00:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:00:07 + set +u
13:00:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:00:07 + set +x
13:00:07 + source_safely /tmp/tmp.IujCUsN8hu/bin/activate
13:00:07 + '[' -z /tmp/tmp.IujCUsN8hu/bin/activate ']'
13:00:07 + relax_set
13:00:07 + set +e
13:00:07 + set +o pipefail
13:00:07 + . /tmp/tmp.IujCUsN8hu/bin/activate
13:00:07 ++ deactivate nondestructive
13:00:07 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
13:00:07 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
13:00:07 ++ export PATH
13:00:07 ++ unset _OLD_VIRTUAL_PATH
13:00:07 ++ '[' -n '' ']'
13:00:07 ++ '[' -n /bin/bash -o -n '' ']'
13:00:07 ++ hash -r
13:00:07 ++ '[' -n '' ']'
13:00:07 ++ unset VIRTUAL_ENV
13:00:07 ++ '[' '!' nondestructive = nondestructive ']'
13:00:07 ++ VIRTUAL_ENV=/tmp/tmp.IujCUsN8hu
13:00:07 ++ export VIRTUAL_ENV
13:00:07 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
13:00:07 ++ PATH=/tmp/tmp.IujCUsN8hu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
13:00:07 ++ export PATH
13:00:07 ++ '[' -n '' ']'
13:00:07 ++ '[' -z '' ']'
13:00:07 ++ _OLD_VIRTUAL_PS1='(tmp.IujCUsN8hu) '
13:00:07 ++ '[' 'x(tmp.IujCUsN8hu) ' '!=' x ']'
13:00:07 ++ PS1='(tmp.IujCUsN8hu) (tmp.IujCUsN8hu) '
13:00:07 ++ export PS1
13:00:07 ++ '[' -n /bin/bash -o -n '' ']'
13:00:07 ++ hash -r
13:00:07 + load_set
13:00:07 + _setopts=hxB
13:00:07 ++ tr : ' '
13:00:07 ++ echo braceexpand:hashall:interactive-comments:xtrace
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o braceexpand
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o hashall
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o interactive-comments
13:00:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:00:07 + set +o xtrace
13:00:07 ++ echo hxB
13:00:07 ++ sed 's/./& /g'
13:00:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:00:07 + set +h
13:00:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:00:07 + set +x
13:00:07 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
13:00:07 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
13:00:07 + export TEST_OPTIONS=
13:00:07 + TEST_OPTIONS=
13:00:07 ++ mktemp -d
13:00:07 + WORKDIR=/tmp/tmp.o3nGSSjcc7
13:00:07 + cd /tmp/tmp.o3nGSSjcc7
13:00:07 + docker login -u docker -p docker nexus3.onap.org:10001
13:00:08 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
13:00:08 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
13:00:08 Configure a credential helper to remove this warning. See
13:00:08 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
13:00:08
13:00:08 Login Succeeded
13:00:08 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
13:00:08 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
13:00:08 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
13:00:08 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
13:00:08 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
13:00:08 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
13:00:08 + relax_set
13:00:08 + set +e
13:00:08 + set +o pipefail
13:00:08 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
13:00:08 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
13:00:08 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
13:00:08 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
13:00:08 +++ GERRIT_BRANCH=master
13:00:08 +++ echo GERRIT_BRANCH=master
13:00:08 GERRIT_BRANCH=master
13:00:08 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
13:00:08 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
13:00:08 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
13:00:08 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
13:00:09 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
13:00:09 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
13:00:09 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
13:00:09 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
13:00:09 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
13:00:09 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
13:00:09 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
13:00:09 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
13:00:09 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
13:00:09 +++ grafana=false
13:00:09 +++ gui=false
13:00:09 +++ [[ 2 -gt 0 ]]
13:00:09 +++ key=apex-pdp
13:00:09 +++ case $key in
13:00:09 +++ echo apex-pdp
13:00:09 apex-pdp
13:00:09 +++ component=apex-pdp
13:00:09 +++ shift
13:00:09 +++ [[ 1 -gt 0 ]]
13:00:09 +++ key=--grafana
13:00:09 +++ case $key in
13:00:09 +++ grafana=true
13:00:09 +++ shift
13:00:09 +++ [[ 0 -gt 0 ]]
13:00:09 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
13:00:09 +++ echo 'Configuring docker compose...'
13:00:09 Configuring docker compose...
13:00:09 +++ source export-ports.sh
13:00:09 +++ source get-versions.sh
13:00:12 +++ '[' -z pap ']'
13:00:12 +++ '[' -n apex-pdp ']'
13:00:12 +++ '[' apex-pdp == logs ']'
13:00:12 +++ '[' true = true ']'
13:00:12 +++ echo 'Starting apex-pdp application with Grafana'
13:00:12 Starting apex-pdp application with Grafana
13:00:12 +++ docker-compose up -d apex-pdp grafana
13:00:12 Creating network "compose_default" with the default driver
13:00:13 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
13:00:13 latest: Pulling from prom/prometheus
13:00:16 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
13:00:16 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
13:00:16 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
13:00:16 latest: Pulling from grafana/grafana
13:00:22 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
13:00:22 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
13:00:22 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
13:00:22 10.10.2: Pulling from mariadb
13:00:30 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
13:00:30 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
13:00:35 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.3-SNAPSHOT)...
13:00:35 3.1.3-SNAPSHOT: Pulling from onap/policy-models-simulator
13:00:40 Digest: sha256:f41ae0e698a7eee4268ba3d29c141e50ab86dbca0876f787d3d80e16d6bffd9e
13:00:40 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.3-SNAPSHOT
13:00:40 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
13:00:40 latest: Pulling from confluentinc/cp-zookeeper
13:00:55 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
13:00:55 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
13:00:55 Pulling kafka (confluentinc/cp-kafka:latest)...
13:00:56 latest: Pulling from confluentinc/cp-kafka
13:00:58 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
13:00:58 Status: Downloaded newer image for confluentinc/cp-kafka:latest
13:00:58 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.3-SNAPSHOT)...
13:00:58 3.1.3-SNAPSHOT: Pulling from onap/policy-db-migrator
13:01:04 Digest: sha256:5d7952b935efae68db532aa9bf4a7451f913c2febbcb55d78ebb900490cdf742
13:01:04 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.3-SNAPSHOT
13:01:04 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT)...
13:01:04 3.1.3-SNAPSHOT: Pulling from onap/policy-api
13:01:06 Digest: sha256:7fad0e07e4ad14d7b1ec6aec34f8583031a00f072037db0e6764795a9c95f7fd
13:01:06 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT
13:01:06 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT)...
13:01:06 3.1.3-SNAPSHOT: Pulling from onap/policy-pap
13:01:10 Digest: sha256:7f3b58c4f9b75937b65a0c67c12bb88aa2c134f077126cfa8a21b501b6bc004c
13:01:10 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT
13:01:10 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT)...
13:01:10 3.1.3-SNAPSHOT: Pulling from onap/policy-apex-pdp
13:01:22 Digest: sha256:8770653266299381ba06ecf1ac20de5cc32cd747d987933c80da099704d6db0f
13:01:22 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT
13:01:22 Creating zookeeper ...
13:01:22 Creating prometheus ...
13:01:22 Creating simulator ...
13:01:22 Creating mariadb ...
13:01:36 Creating simulator ... done
13:01:37 Creating mariadb ... done
13:01:37 Creating policy-db-migrator ...
13:01:38 Creating policy-db-migrator ... done
13:01:38 Creating policy-api ...
13:01:38 Creating policy-api ... done
13:01:40 Creating zookeeper ... done
13:01:40 Creating kafka ...
13:01:41 Creating kafka ... done
13:01:41 Creating policy-pap ...
13:01:42 Creating prometheus ... done
13:01:42 Creating grafana ...
13:01:43 Creating grafana ... done
13:01:44 Creating policy-pap ... done
13:01:44 Creating policy-apex-pdp ...
13:01:45 Creating policy-apex-pdp ... done
13:01:45 +++ echo 'Prometheus server: http://localhost:30259'
13:01:45 Prometheus server: http://localhost:30259
13:01:45 +++ echo 'Grafana server: http://localhost:30269'
13:01:45 Grafana server: http://localhost:30269
13:01:45 +++ cd /w/workspace/policy-pap-master-project-csit-pap
13:01:45 ++ sleep 10
13:01:55 ++ unset http_proxy https_proxy
13:01:55 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
13:01:55 Waiting for REST to come up on localhost port 30003...
13:01:55 NAMES STATUS
13:01:55 policy-apex-pdp Up 10 seconds
13:01:55 grafana Up 12 seconds
13:01:55 policy-pap Up 11 seconds
13:01:55 kafka Up 14 seconds
13:01:55 policy-api Up 16 seconds
13:01:55 simulator Up 19 seconds
13:01:55 mariadb Up 18 seconds
13:01:55 prometheus Up 13 seconds
13:01:55 zookeeper Up 15 seconds
13:02:00 NAMES STATUS
13:02:00 policy-apex-pdp Up 15 seconds
13:02:00 grafana Up 17 seconds
13:02:00 policy-pap Up 16 seconds
13:02:00 kafka Up 19 seconds
13:02:00 policy-api Up 21 seconds
13:02:00 simulator Up 24 seconds
13:02:00 mariadb Up 23 seconds
13:02:00 prometheus Up 18 seconds
13:02:00 zookeeper Up 20 seconds
13:02:05 NAMES STATUS
13:02:05 policy-apex-pdp Up 20 seconds
13:02:05 grafana Up 22 seconds
13:02:05 policy-pap Up 21 seconds
13:02:05 kafka Up 24 seconds
13:02:05 policy-api Up 26 seconds
13:02:05 simulator Up 29 seconds
13:02:05 mariadb Up 28 seconds
13:02:05 prometheus Up 23 seconds
13:02:05 zookeeper Up 25 seconds
13:02:10 NAMES STATUS
13:02:10 policy-apex-pdp Up 25 seconds
13:02:10 grafana Up 27 seconds
13:02:10 policy-pap Up 26 seconds
13:02:10 kafka Up 29 seconds
13:02:10 policy-api Up 31 seconds
13:02:10 simulator Up 34 seconds
13:02:10 mariadb Up 33 seconds
13:02:10 prometheus Up 28 seconds
13:02:10 zookeeper Up 30 seconds
13:02:15 NAMES STATUS
13:02:15 policy-apex-pdp Up 30 seconds
13:02:15 grafana Up 32 seconds
13:02:15 policy-pap Up 31 seconds
13:02:15 kafka Up 34 seconds
13:02:15 policy-api Up 36 seconds
13:02:15 simulator Up 39 seconds
13:02:15 mariadb Up 38 seconds
13:02:15 prometheus Up 33 seconds
13:02:15 zookeeper Up 35 seconds
13:02:15 ++ export 'SUITES=pap-test.robot
13:02:15 pap-slas.robot'
13:02:15 ++ SUITES='pap-test.robot
13:02:15 pap-slas.robot'
13:02:15 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
13:02:15 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
13:02:15 + load_set
13:02:15 + _setopts=hxB
13:02:15 ++ echo braceexpand:hashall:interactive-comments:xtrace
13:02:15 ++ tr : ' '
13:02:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:02:15 + set +o braceexpand
13:02:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:02:15 + set +o hashall
13:02:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:02:15 + set +o interactive-comments
13:02:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:02:15 + set +o xtrace
13:02:15 ++ sed 's/./& /g'
13:02:15 ++ echo hxB
13:02:15 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:02:15 + set +h
13:02:15 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:02:15 + set +x
13:02:15 + docker_stats
13:02:15 ++ uname -s
13:02:15 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
13:02:15 + '[' Linux == Darwin ']'
13:02:15 + sh -c 'top -bn1 | head -3'
13:02:15 top - 13:02:15 up 4 min, 0 users, load average: 3.20, 1.43, 0.57
13:02:15 Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
13:02:15 %Cpu(s): 13.1 us, 2.7 sy, 0.0 ni, 79.1 id, 4.9 wa, 0.0 hi, 0.1 si, 0.1 st
13:02:15 + echo
13:02:15
13:02:15 + sh -c 'free -h'
13:02:15 total used free shared buff/cache available
13:02:15 Mem: 31G 2.5G 22G 1.3M 6.2G 28G
13:02:15 Swap: 1.0G 0B 1.0G
13:02:15 + echo
13:02:15 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
13:02:15
13:02:15 NAMES STATUS
13:02:15 policy-apex-pdp Up 30 seconds
13:02:15 grafana Up 32 seconds
13:02:15 policy-pap Up 31 seconds
13:02:15 kafka Up 34 seconds
13:02:15 policy-api Up 36 seconds
13:02:15 simulator Up 39 seconds
13:02:15 mariadb Up 38 seconds
13:02:15 prometheus Up 33 seconds
13:02:15 zookeeper Up 35 seconds
13:02:15 + echo
13:02:15 + docker stats --no-stream
13:02:15
13:02:18 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
13:02:18 0194fc8575df policy-apex-pdp 198.65% 175.6MiB / 31.41GiB 0.55% 7.14kB / 6.85kB 0B / 0B 48
13:02:18 e5aaaeca65e2 grafana 0.03% 52.25MiB / 31.41GiB 0.16% 18.8kB / 3.12kB 0B / 24.9MB 18
13:02:18 f163b5467bf6 policy-pap 2.71% 515.8MiB / 31.41GiB 1.60% 31.3kB / 32.7kB 0B / 149MB 62
13:02:18 c2f66168851e kafka 23.13% 373.4MiB / 31.41GiB 1.16% 66.8kB / 69.7kB 0B / 508kB 83
13:02:18 f43fa25deceb policy-api 0.12% 433.2MiB / 31.41GiB 1.35% 988kB / 646kB 0B / 0B 52
13:02:18 1f164baf00c3 simulator 0.08% 119.6MiB / 31.41GiB 0.37% 1.63kB / 0B 4.1kB / 0B 76
13:02:18 87deb0f0cbfd mariadb 0.02% 102.4MiB / 31.41GiB 0.32% 934kB / 1.18MB 11MB / 63.6MB 41
13:02:18 352203b32900 prometheus 0.05% 18.97MiB / 31.41GiB 0.06% 1.28kB / 158B 0B / 0B 12
13:02:18 3d9e3ebcb179 zookeeper 0.07% 96.44MiB / 31.41GiB 0.30% 52kB / 46.3kB 98.3kB / 365kB 59
13:02:18 + echo
13:02:18
13:02:18 + cd /tmp/tmp.o3nGSSjcc7
13:02:18 + echo 'Reading the testplan:'
13:02:18 Reading the testplan:
13:02:18 + echo 'pap-test.robot
13:02:18 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
13:02:18 pap-slas.robot'
13:02:18 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
13:02:18 + cat testplan.txt
13:02:18 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
13:02:18 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
13:02:18 ++ xargs
13:02:18 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
13:02:18 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
13:02:18 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
13:02:18 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
13:02:18 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
13:02:18 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
13:02:18 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
13:02:18 + relax_set
13:02:18 + set +e
13:02:18 + set +o pipefail
13:02:18 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
13:02:18 ==============================================================================
13:02:18 pap
13:02:18 ==============================================================================
13:02:18 pap.Pap-Test
13:02:18 ==============================================================================
13:02:19 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
13:02:19 ------------------------------------------------------------------------------
13:02:20 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
13:02:20 ------------------------------------------------------------------------------
13:02:20 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
13:02:20 ------------------------------------------------------------------------------
13:02:21 Healthcheck :: Verify policy pap health check | PASS |
13:02:21 ------------------------------------------------------------------------------
13:02:41 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
13:02:41 ------------------------------------------------------------------------------
13:02:41 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
13:02:41 ------------------------------------------------------------------------------
13:02:42 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
13:02:42 ------------------------------------------------------------------------------
13:02:42 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
13:02:42 ------------------------------------------------------------------------------
13:02:42 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
13:02:42 ------------------------------------------------------------------------------
13:02:43 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
13:02:43 ------------------------------------------------------------------------------
13:02:43 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
13:02:43 ------------------------------------------------------------------------------
13:02:43 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
13:02:43 ------------------------------------------------------------------------------
13:02:43 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
13:02:43 ------------------------------------------------------------------------------
13:02:43 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
13:02:43 ------------------------------------------------------------------------------
13:02:44 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
13:02:44 ------------------------------------------------------------------------------
13:02:44 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
13:02:44 ------------------------------------------------------------------------------
13:02:44 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
13:02:44 ------------------------------------------------------------------------------
13:03:04 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
13:03:04 ------------------------------------------------------------------------------
13:03:04 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
13:03:04 ------------------------------------------------------------------------------
13:03:05 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
13:03:05 ------------------------------------------------------------------------------
13:03:05 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
13:03:05 ------------------------------------------------------------------------------
13:03:05 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
13:03:05 ------------------------------------------------------------------------------
13:03:05 pap.Pap-Test | PASS |
13:03:05 22 tests, 22 passed, 0 failed
13:03:05 ==============================================================================
13:03:05 pap.Pap-Slas
13:03:05 ==============================================================================
13:04:05 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
13:04:05 ------------------------------------------------------------------------------
13:04:05 pap.Pap-Slas | PASS |
13:04:05 8 tests, 8 passed, 0 failed
13:04:05 ==============================================================================
13:04:05 pap | PASS |
13:04:05 30 tests, 30 passed, 0 failed
13:04:05 ==============================================================================
13:04:05 Output: /tmp/tmp.o3nGSSjcc7/output.xml
13:04:05 Log: /tmp/tmp.o3nGSSjcc7/log.html
13:04:05 Report: /tmp/tmp.o3nGSSjcc7/report.html
13:04:05 + RESULT=0
13:04:05 + load_set
13:04:05 + _setopts=hxB
13:04:05 ++ echo braceexpand:hashall:interactive-comments:xtrace
13:04:05 ++ tr : ' '
13:04:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:04:05 + set +o braceexpand
13:04:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:04:05 + set +o hashall
13:04:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:04:05 + set +o interactive-comments
13:04:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:04:05 + set +o xtrace
13:04:05 ++ echo hxB
13:04:05 ++ sed 's/./& /g'
13:04:05 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:04:05 + set +h
13:04:05 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:04:05 + set +x
13:04:05 + echo 'RESULT: 0'
13:04:05 RESULT: 0
13:04:05 + exit 0
13:04:05 + on_exit
13:04:05 + rc=0
13:04:05 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
13:04:05 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
13:04:05 NAMES STATUS
13:04:05 policy-apex-pdp Up 2 minutes
13:04:05 grafana Up 2 minutes
13:04:05 policy-pap Up 2 minutes
13:04:05 kafka Up 2 minutes
13:04:05 policy-api Up 2 minutes
13:04:05 simulator Up 2 minutes
13:04:05 mariadb Up 2 minutes
13:04:05 prometheus Up 2 minutes
13:04:05 zookeeper Up 2 minutes
13:04:05 + docker_stats
13:04:05 ++ uname -s
13:04:05 + '[' Linux == Darwin ']'
13:04:05 + sh -c 'top -bn1 | head -3'
13:04:05 top - 13:04:05 up 6 min, 0 users, load average: 0.74, 1.12, 0.55
13:04:05 Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
13:04:05 %Cpu(s): 10.9 us, 2.1 sy, 0.0 ni, 83.1 id, 3.8 wa, 0.0 hi, 0.1 si, 0.1 st
13:04:05 + echo
13:04:05 
13:04:05 + sh -c 'free -h'
13:04:05 total used free shared buff/cache available
13:04:05 Mem: 31G 2.7G 22G 1.3M 6.2G 28G
13:04:05 Swap: 1.0G 0B 1.0G
13:04:05 + echo
13:04:05 
13:04:05 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
13:04:05 NAMES STATUS
13:04:05 policy-apex-pdp Up 2 minutes
13:04:05 grafana Up 2 minutes
13:04:05 policy-pap Up 2 minutes
13:04:05 kafka Up 2 minutes
13:04:05 policy-api Up 2 minutes
13:04:05 simulator Up 2 minutes
13:04:05 mariadb Up 2 minutes
13:04:05 prometheus Up 2 minutes
13:04:05 zookeeper Up 2 minutes
13:04:05 + echo
13:04:05 
13:04:05 + docker stats --no-stream
13:04:08 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
13:04:08 0194fc8575df policy-apex-pdp 1.36% 180MiB / 31.41GiB 0.56% 56kB / 90.2kB 0B / 0B 52
13:04:08 e5aaaeca65e2 grafana 0.07% 59.85MiB / 31.41GiB 0.19% 19.9kB / 4.34kB 0B / 24.9MB 18
13:04:08 f163b5467bf6 policy-pap 1.32% 544.2MiB / 31.41GiB 1.69% 2.47MB / 1.04MB 0B / 149MB 66
13:04:08 c2f66168851e kafka 1.02% 397.7MiB / 31.41GiB 1.24% 234kB / 210kB 0B / 606kB 85
13:04:08 f43fa25deceb policy-api 0.12% 442.1MiB / 31.41GiB 1.37% 2.45MB / 1.1MB 0B / 0B 55
13:04:08 1f164baf00c3 simulator 0.11% 119.9MiB / 31.41GiB 0.37% 1.94kB / 0B 4.1kB / 0B 78
13:04:08 87deb0f0cbfd mariadb 0.01% 103.6MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 63.8MB 28
13:04:08 352203b32900 prometheus 0.05% 24.88MiB / 31.41GiB 0.08% 184kB / 10.7kB 0B / 0B 12
13:04:08 3d9e3ebcb179 zookeeper 0.14% 95.81MiB / 31.41GiB 0.30% 54.9kB / 47.8kB 98.3kB / 365kB 59
13:04:08 + echo
13:04:08 
13:04:08 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
13:04:08 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
13:04:08 + relax_set
13:04:08 + set +e
13:04:08 + set +o pipefail
13:04:08 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
13:04:08 ++ echo 'Shut down started!'
13:04:08 Shut down started!
13:04:08 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
13:04:08 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
13:04:08 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
13:04:08 ++ source export-ports.sh
13:04:08 ++ source get-versions.sh
13:04:11 ++ echo 'Collecting logs from docker compose containers...'
13:04:11 Collecting logs from docker compose containers...
13:04:11 ++ docker-compose logs
13:04:12 ++ cat docker_compose.log
13:04:12 Attaching to policy-apex-pdp, grafana, policy-pap, kafka, policy-api, policy-db-migrator, simulator, mariadb, prometheus, zookeeper
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.244509505Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-05-02T13:01:43Z
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245096853Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245111794Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245116364Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245120694Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245124854Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245128504Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245132744Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245136014Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245140034Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245143424Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245147724Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245158224Z level=info msg=Target target=[all]
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245224955Z level=info msg="Path Home" path=/usr/share/grafana
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245229745Z level=info msg="Path Data" path=/var/lib/grafana
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245233235Z level=info msg="Path Logs" path=/var/log/grafana
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245238025Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245241595Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
13:04:12 grafana | logger=settings t=2024-05-02T13:01:43.245245156Z level=info msg="App mode production"
13:04:12 grafana | logger=sqlstore t=2024-05-02T13:01:43.245717812Z level=info msg="Connecting to DB" dbtype=sqlite3
13:04:12 grafana | logger=sqlstore t=2024-05-02T13:01:43.245742483Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.246618935Z level=info msg="Starting DB migrations"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.247868113Z level=info msg="Executing migration" id="create migration_log table"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.248889598Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.020585ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.255393502Z level=info msg="Executing migration" id="create user table"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.256297945Z level=info msg="Migration successfully executed" id="create user table" duration=904.163µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.261042363Z level=info msg="Executing migration" id="add unique index user.login"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.262407523Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.36567ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.271287991Z level=info msg="Executing migration" id="add unique index user.email"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.272148274Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=859.823µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.276298753Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.277161296Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=862.573µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.28157761Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.282379751Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=801.931µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.289449693Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.293228748Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.779395ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.296538435Z level=info msg="Executing migration" id="create user table v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.297391018Z level=info msg="Migration successfully executed" id="create user table v2" duration=852.103µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.299387436Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.300157998Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=770.412µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.302959068Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.30381191Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=852.512µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.308604429Z level=info msg="Executing migration" id="copy data_source v1 to v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.309081726Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=476.667µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.311121196Z level=info msg="Executing migration" id="Drop old table user_v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.311685884Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=562.478µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.314925751Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.316908939Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.982058ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.321417004Z level=info msg="Executing migration" id="Update user table charset"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.321443815Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.631µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.324085963Z level=info msg="Executing migration" id="Add last_seen_at column to user"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.325190549Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.104176ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.327195908Z level=info msg="Executing migration" id="Add missing user data"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.327513672Z level=info msg="Migration successfully executed" id="Add missing user data" duration=317.654µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.330441014Z level=info msg="Executing migration" id="Add is_disabled column to user"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.332446383Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.003919ms
13:04:12 kafka | ===> User
13:04:12 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
13:04:12 kafka | ===> Configuring ...
13:04:12 kafka | Running in Zookeeper mode...
13:04:12 kafka | ===> Running preflight checks ...
13:04:12 kafka | ===> Check if /var/lib/kafka/data is writable ...
13:04:12 kafka | ===> Check if Zookeeper is healthy ...
13:04:12 kafka | [2024-05-02 13:01:45,567] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:host.name=c2f66168851e (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,568] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,571] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,574] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
13:04:12 kafka | [2024-05-02 13:01:45,578] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
13:04:12 kafka | [2024-05-02 13:01:45,586] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
13:04:12 kafka | [2024-05-02 13:01:45,603] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
13:04:12 kafka | [2024-05-02 13:01:45,604] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
13:04:12 kafka | [2024-05-02 13:01:45,613] INFO Socket connection established, initiating session, client: /172.17.0.8:39750, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
13:04:12 kafka | [2024-05-02 13:01:45,643] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003d1650000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
13:04:12 kafka | [2024-05-02 13:01:45,762] INFO Session: 0x1000003d1650000 closed (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:45,762] INFO EventThread shut down for session: 0x1000003d1650000 (org.apache.zookeeper.ClientCnxn)
13:04:12 kafka | Using log4j config /etc/kafka/log4j.properties
13:04:12 kafka | ===> Launching ...
13:04:12 kafka | ===> Launching kafka ...
13:04:12 kafka | [2024-05-02 13:01:46,505] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
13:04:12 kafka | [2024-05-02 13:01:46,848] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
13:04:12 kafka | [2024-05-02 13:01:46,946] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
13:04:12 kafka | [2024-05-02 13:01:46,948] INFO starting (kafka.server.KafkaServer)
13:04:12 kafka | [2024-05-02 13:01:46,948] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
13:04:12 kafka | [2024-05-02 13:01:46,964] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:host.name=c2f66168851e (org.apache.zookeeper.ZooKeeper)
13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.368391892Z level=info msg="Executing migration" id="Add index user.login/user.email"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.369782972Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.38799ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.374210206Z level=info msg="Executing migration" id="Add is_service_account column to user"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.375429253Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.218217ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.379089456Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.386917859Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.827623ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.390028264Z level=info msg="Executing migration" id="Add uid column to user"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.391252292Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.222758ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.39736143Z level=info msg="Executing migration" id="Update uid column values for users"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.397621213Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=270.104µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.400786239Z level=info msg="Executing migration" id="Add unique index user_uid"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.402081538Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.295999ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.405766121Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.406336409Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=569.788µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.409548185Z level=info msg="Executing migration" id="create temp user table v1-7"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.410425198Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=874.503µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.415145876Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.415931988Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=785.962µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.418643147Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.419483619Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=838.762µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.424340719Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.425169751Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=827.682µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.428195784Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.429012496Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=816.642µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.431715285Z level=info msg="Executing migration" id="Update temp_user table charset"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.431740536Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=26.091µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.434678988Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.435855005Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.174717ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.440520072Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.441790831Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.271168ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.444871305Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.445624896Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=752.101µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.448416846Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.449136066Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=719.47µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.454647166Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.459920332Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.269536ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.462877775Z level=info msg="Executing migration" id="create temp_user v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.463792858Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=913.853µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.466611249Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.467458241Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=848.703µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.472123848Z level=info msg="Executing migration" id="create index
IDX_temp_user_org_id - v2" 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.47294183Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=815.882µs 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.475075251Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.475922893Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=847.252µs 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.478812885Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 13:04:12 mariadb | 2024-05-02 13:01:37+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 13:04:12 mariadb | 2024-05-02 13:01:37+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 13:04:12 mariadb | 2024-05-02 13:01:37+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 13:04:12 mariadb | 2024-05-02 13:01:37+00:00 [Note] [Entrypoint]: Initializing database files 13:04:12 mariadb | 2024-05-02 13:01:37 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 13:04:12 mariadb | 2024-05-02 13:01:37 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 13:04:12 mariadb | 2024-05-02 13:01:37 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 13:04:12 mariadb | 13:04:12 mariadb | 13:04:12 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
13:04:12 mariadb | To do so, start the server, then issue the following command:
13:04:12 mariadb | 
13:04:12 mariadb | '/usr/bin/mysql_secure_installation'
13:04:12 mariadb | 
13:04:12 mariadb | which will also give you the option of removing the test
13:04:12 mariadb | databases and anonymous user created by default. This is
13:04:12 mariadb | strongly recommended for production servers.
13:04:12 mariadb | 
13:04:12 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
13:04:12 mariadb | 
13:04:12 mariadb | Please report any problems at https://mariadb.org/jira
13:04:12 mariadb | 
13:04:12 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
13:04:12 mariadb | 
13:04:12 mariadb | Consider joining MariaDB's strong and vibrant community:
13:04:12 mariadb | https://mariadb.org/get-involved/
13:04:12 mariadb | 
13:04:12 mariadb | 2024-05-02 13:01:38+00:00 [Note] [Entrypoint]: Database files initialized
13:04:12 mariadb | 2024-05-02 13:01:38+00:00 [Note] [Entrypoint]: Starting temporary server
13:04:12 mariadb | 2024-05-02 13:01:38+00:00 [Note] [Entrypoint]: Waiting for server startup
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ...
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: Number of transaction pools: 1
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.479662967Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=850.022µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.484503687Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.484946453Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=442.766µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.487181595Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.487753914Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=570.318µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.490606425Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: Completed initialization of buffer pool
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: 128 rollback segments are active.
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] InnoDB: log sequence number 46590; transaction id 14
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] Plugin 'FEEDBACK' is disabled.
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
13:04:12 mariadb | 2024-05-02 13:01:38 0 [Note] mariadbd: ready for connections.
13:04:12 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
13:04:12 mariadb | 2024-05-02 13:01:39+00:00 [Note] [Entrypoint]: Temporary server started.
13:04:12 mariadb | 2024-05-02 13:01:41+00:00 [Note] [Entrypoint]: Creating user policy_user
13:04:12 mariadb | 2024-05-02 13:01:41+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
13:04:12 mariadb | 
13:04:12 mariadb | 
13:04:12 mariadb | 2024-05-02 13:01:41+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
13:04:12 mariadb | 2024-05-02 13:01:41+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
13:04:12 mariadb | #!/bin/bash -xv
13:04:12 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
13:04:12 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
13:04:12 mariadb | #
13:04:12 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
13:04:12 mariadb | # you may not use this file except in compliance with the License.
13:04:12 mariadb | # You may obtain a copy of the License at
13:04:12 mariadb | #
13:04:12 mariadb | # http://www.apache.org/licenses/LICENSE-2.0
13:04:12 mariadb | #
13:04:12 mariadb | # Unless required by applicable law or agreed to in writing, software
13:04:12 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
13:04:12 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13:04:12 mariadb | # See the License for the specific language governing permissions and
13:04:12 mariadb | # limitations under the License.
13:04:12 mariadb | 
13:04:12 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:04:12 mariadb | do
13:04:12 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
13:04:12 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
13:04:12 mariadb | done
13:04:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:04:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
13:04:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:04:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:04:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
13:04:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:04:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:04:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
13:04:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:04:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:04:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
13:04:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:04:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:04:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
13:04:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:04:12 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:04:12 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
13:04:12 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:04:12 mariadb | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.491048001Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=442.836µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.493874392Z level=info msg="Executing migration" id="create star table"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.494555292Z level=info msg="Migration successfully executed" id="create star table" duration=680.11µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.498786833Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.499606024Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=818.211µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.505911565Z level=info msg="Executing migration" id="create org table v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.507338246Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.423151ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.511035279Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.512186986Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.150737ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.516351576Z level=info msg="Executing migration" id="create org_user table v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.517190588Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=838.252µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.520642698Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.521589072Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=945.044µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.524533994Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.525475288Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=940.824µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.528705164Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.529594767Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=889.533µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.533547774Z level=info msg="Executing migration" id="Update org table charset"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.533576374Z level=info msg="Migration successfully executed" id="Update org table charset" duration=29.35µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.536614008Z level=info msg="Executing migration" id="Update org_user table charset"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.536647199Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=32.791µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.539096654Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.539373008Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=275.044µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.54228784Z level=info msg="Executing migration" id="create dashboard table"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.543253394Z level=info msg="Migration successfully executed" id="create dashboard table" duration=965.414µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.548619121Z level=info msg="Executing migration" id="add index dashboard.account_id"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.550067872Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.448391ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.553486182Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.554966663Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.479881ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.558364472Z level=info msg="Executing migration" id="create dashboard_tag table"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.559187044Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=820.092µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.562455751Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.563455175Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=999.414µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.567591655Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.568496148Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=904.783µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.571896717Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.579956853Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.058946ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.583250981Z level=info msg="Executing migration" id="create dashboard v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.58387591Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=623.589µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.587771866Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.588408545Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=636.679µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.591586411Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.593063183Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.476921ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.598041984Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.598666583Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=623.599µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.6026124Z level=info msg="Executing migration" id="drop table dashboard_v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.603629385Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.014515ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.60675441Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.606861842Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=108.142µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.610094968Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.612099717Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.003879ms
13:04:12 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
13:04:12 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
13:04:12 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
13:04:12 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
13:04:12 mariadb | 
13:04:12 mariadb | 2024-05-02 13:01:42+00:00 [Note] [Entrypoint]: Stopping temporary server
13:04:12 mariadb | 2024-05-02 13:01:42 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
13:04:12 mariadb | 2024-05-02 13:01:42 0 [Note] InnoDB: FTS optimize thread exiting.
13:04:12 mariadb | 2024-05-02 13:01:42 0 [Note] InnoDB: Starting shutdown...
13:04:12 mariadb | 2024-05-02 13:01:42 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
13:04:12 mariadb | 2024-05-02 13:01:42 0 [Note] InnoDB: Buffer pool(s) dump completed at 240502 13:01:42
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Shutdown completed; log sequence number 330139; transaction id 298
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] mariadbd: Shutdown complete
13:04:12 mariadb | 
13:04:12 mariadb | 2024-05-02 13:01:43+00:00 [Note] [Entrypoint]: Temporary server stopped
13:04:12 mariadb | 
13:04:12 mariadb | 2024-05-02 13:01:43+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
13:04:12 mariadb | 
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Number of transaction pools: 1
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Completed initialization of buffer pool
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: 128 rollback segments are active.
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: log sequence number 330139; transaction id 299
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] Plugin 'FEEDBACK' is disabled.
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] Server socket created on IP: '0.0.0.0'.
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] Server socket created on IP: '::'.
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] mariadbd: ready for connections.
13:04:12 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
13:04:12 mariadb | 2024-05-02 13:01:43 0 [Note] InnoDB: Buffer pool(s) load completed at 240502 13:01:43
13:04:12 mariadb | 2024-05-02 13:01:43 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
13:04:12 mariadb | 2024-05-02 13:01:44 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
13:04:12 mariadb | 2024-05-02 13:01:44 11 [Warning] Aborted connection 11 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
13:04:12 mariadb | 2024-05-02 13:01:45 59 [Warning] Aborted connection 59 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
13:04:12 policy-db-migrator | Waiting for mariadb port 3306...
13:04:12 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
13:04:12 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
13:04:12 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
13:04:12 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
13:04:12 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
13:04:12 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
13:04:12 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
13:04:12 policy-db-migrator | 321 blocks
13:04:12 policy-db-migrator | Preparing upgrade release version: 0800
13:04:12 policy-db-migrator | Preparing upgrade release version: 0900
13:04:12 policy-db-migrator | Preparing upgrade release version: 1000
13:04:12 policy-db-migrator | Preparing upgrade release version: 1100
13:04:12 policy-db-migrator | Preparing upgrade release version: 1200
13:04:12 policy-db-migrator | Preparing upgrade release version: 1300
13:04:12 policy-db-migrator | Done
13:04:12 policy-db-migrator | name version
13:04:12 policy-db-migrator | policyadmin 0
13:04:12 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
13:04:12 policy-db-migrator | upgrade: 0 -> 1300
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | Waiting for mariadb port 3306...
13:04:12 policy-apex-pdp | mariadb (172.17.0.3:3306) open
13:04:12 policy-apex-pdp | Waiting for kafka port 9092...
13:04:12 policy-apex-pdp | kafka (172.17.0.8:9092) open
13:04:12 policy-apex-pdp | Waiting for pap port 6969...
13:04:12 policy-apex-pdp | pap (172.17.0.9:6969) open 13:04:12 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.216+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.398+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:04:12 policy-apex-pdp | allow.auto.create.topics = true 13:04:12 policy-apex-pdp | auto.commit.interval.ms = 5000 13:04:12 policy-apex-pdp | auto.include.jmx.reporter = true 13:04:12 policy-apex-pdp | auto.offset.reset = latest 13:04:12 policy-apex-pdp | bootstrap.servers = [kafka:9092] 13:04:12 policy-apex-pdp | check.crcs = true 13:04:12 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 13:04:12 policy-apex-pdp | client.id = consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-1 13:04:12 policy-apex-pdp | client.rack = 13:04:12 policy-apex-pdp | connections.max.idle.ms = 540000 13:04:12 policy-apex-pdp | default.api.timeout.ms = 60000 13:04:12 policy-apex-pdp | enable.auto.commit = true 13:04:12 policy-apex-pdp | exclude.internal.topics = true 13:04:12 policy-apex-pdp | fetch.max.bytes = 52428800 13:04:12 
policy-apex-pdp | fetch.max.wait.ms = 500 13:04:12 policy-apex-pdp | fetch.min.bytes = 1 13:04:12 policy-apex-pdp | group.id = c1ea3ecb-3042-4296-b7e8-b195f884ad84 13:04:12 policy-apex-pdp | group.instance.id = null 13:04:12 policy-apex-pdp | heartbeat.interval.ms = 3000 13:04:12 policy-apex-pdp | interceptor.classes = [] 13:04:12 policy-apex-pdp | internal.leave.group.on.close = true 13:04:12 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 13:04:12 policy-apex-pdp | isolation.level = read_uncommitted 13:04:12 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 policy-apex-pdp | max.partition.fetch.bytes = 1048576 13:04:12 policy-apex-pdp | max.poll.interval.ms = 300000 13:04:12 policy-apex-pdp | max.poll.records = 500 13:04:12 policy-apex-pdp | metadata.max.age.ms = 300000 13:04:12 policy-apex-pdp | metric.reporters = [] 13:04:12 policy-apex-pdp | metrics.num.samples = 2 13:04:12 policy-apex-pdp | metrics.recording.level = INFO 13:04:12 policy-apex-pdp | metrics.sample.window.ms = 30000 13:04:12 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 
0190-jpatoscacapabilitytype_metadata.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 
policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6
.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jers
ey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/m
etrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,969] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 13:04:12 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:04:12 policy-apex-pdp | receive.buffer.bytes = 65536 13:04:12 policy-apex-pdp | reconnect.backoff.max.ms = 1000 13:04:12 policy-apex-pdp | reconnect.backoff.ms = 50 13:04:12 policy-apex-pdp | request.timeout.ms = 30000 13:04:12 
policy-apex-pdp | retry.backoff.ms = 100 13:04:12 policy-apex-pdp | sasl.client.callback.handler.class = null 13:04:12 policy-apex-pdp | sasl.jaas.config = null 13:04:12 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:04:12 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 13:04:12 policy-apex-pdp | sasl.kerberos.service.name = null 13:04:12 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 13:04:12 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 13:04:12 policy-apex-pdp | sasl.login.callback.handler.class = null 13:04:12 policy-apex-pdp | sasl.login.class = null 13:04:12 policy-apex-pdp | sasl.login.connect.timeout.ms = null 13:04:12 policy-apex-pdp | sasl.login.read.timeout.ms = null 13:04:12 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 13:04:12 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 13:04:12 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 13:04:12 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 13:04:12 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 13:04:12 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 13:04:12 policy-apex-pdp | sasl.mechanism = GSSAPI 13:04:12 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 13:04:12 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 13:04:12 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 13:04:12 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 13:04:12 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 13:04:12 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 13:04:12 policy-apex-pdp | security.protocol = PLAINTEXT 13:04:12 
policy-apex-pdp | security.providers = null 13:04:12 policy-apex-pdp | send.buffer.bytes = 131072 13:04:12 policy-apex-pdp | session.timeout.ms = 45000 13:04:12 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 13:04:12 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 13:04:12 policy-apex-pdp | ssl.cipher.suites = null 13:04:12 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:04:12 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 13:04:12 policy-apex-pdp | ssl.engine.factory.class = null 13:04:12 policy-apex-pdp | ssl.key.password = null 13:04:12 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 13:04:12 policy-apex-pdp | ssl.keystore.certificate.chain = null 13:04:12 policy-apex-pdp | ssl.keystore.key = null 13:04:12 policy-apex-pdp | ssl.keystore.location = null 13:04:12 policy-apex-pdp | ssl.keystore.password = null 13:04:12 policy-apex-pdp | ssl.keystore.type = JKS 13:04:12 policy-apex-pdp | ssl.protocol = TLSv1.3 13:04:12 policy-apex-pdp | ssl.provider = null 13:04:12 policy-apex-pdp | ssl.secure.random.implementation = null 13:04:12 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 13:04:12 policy-apex-pdp | ssl.truststore.certificates = null 13:04:12 policy-apex-pdp | ssl.truststore.location = null 13:04:12 policy-apex-pdp | ssl.truststore.password = null 13:04:12 policy-apex-pdp | ssl.truststore.type = JKS 13:04:12 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 policy-apex-pdp | 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.557+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.557+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.557+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654936556 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.559+00:00|INFO|KafkaConsumer|main] [Consumer 
clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-1, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Subscribed to topic(s): policy-pdp-pap 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.571+00:00|INFO|ServiceManager|main] service manager starting 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.571+00:00|INFO|ServiceManager|main] service manager starting topics 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.572+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c1ea3ecb-3042-4296-b7e8-b195f884ad84, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.591+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:04:12 policy-apex-pdp | allow.auto.create.topics = true 13:04:12 policy-apex-pdp | auto.commit.interval.ms = 5000 13:04:12 policy-apex-pdp | auto.include.jmx.reporter = true 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client 
environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,970] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,972] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) 13:04:12 kafka | [2024-05-02 13:01:46,977] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 13:04:12 kafka | [2024-05-02 13:01:46,984] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 13:04:12 kafka | [2024-05-02 13:01:46,986] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 13:04:12 kafka | [2024-05-02 13:01:46,992] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 13:04:12 kafka | [2024-05-02 13:01:47,003] INFO Socket connection established, initiating session, client: /172.17.0.8:39752, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 13:04:12 kafka | [2024-05-02 13:01:47,013] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003d1650001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 13:04:12 kafka | [2024-05-02 13:01:47,020] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 13:04:12 kafka | [2024-05-02 13:01:47,395] INFO Cluster ID = 241kjIVNQKeIb2Rrsc8nPA (kafka.server.KafkaServer) 13:04:12 kafka | [2024-05-02 13:01:47,399] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 13:04:12 kafka | [2024-05-02 13:01:47,451] INFO KafkaConfig values: 13:04:12 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 13:04:12 kafka | alter.config.policy.class.name = null 13:04:12 kafka | alter.log.dirs.replication.quota.window.num = 11 13:04:12 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 13:04:12 kafka | authorizer.class.name = 13:04:12 kafka | auto.create.topics.enable = true 13:04:12 kafka | auto.include.jmx.reporter = true 13:04:12 kafka | auto.leader.rebalance.enable = true 13:04:12 kafka | background.threads = 10 13:04:12 kafka | broker.heartbeat.interval.ms = 2000 13:04:12 kafka | broker.id = 1 13:04:12 kafka | broker.id.generation.enable = true 13:04:12 kafka | broker.rack = null 13:04:12 kafka | broker.session.timeout.ms = 9000 13:04:12 kafka | client.quota.callback.class = null 13:04:12 kafka | compression.type = producer 13:04:12 kafka | connection.failed.authentication.delay.ms = 100 13:04:12 kafka | connections.max.idle.ms = 600000 13:04:12 kafka | connections.max.reauth.ms = 0 13:04:12 kafka | control.plane.listener.name = null 13:04:12 kafka | controlled.shutdown.enable = true 13:04:12 kafka | controlled.shutdown.max.retries = 3 13:04:12 kafka | controlled.shutdown.retry.backoff.ms = 5000 13:04:12 kafka | controller.listener.names = null 13:04:12 kafka | controller.quorum.append.linger.ms = 25 13:04:12 kafka | controller.quorum.election.backoff.max.ms = 1000 13:04:12 kafka | controller.quorum.election.timeout.ms = 1000 13:04:12 policy-apex-pdp | auto.offset.reset = latest 13:04:12 policy-apex-pdp | bootstrap.servers = [kafka:9092] 13:04:12 policy-apex-pdp | check.crcs = true 13:04:12 
policy-apex-pdp | client.dns.lookup = use_all_dns_ips 13:04:12 policy-apex-pdp | client.id = consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2 13:04:12 policy-apex-pdp | client.rack = 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 
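The `ConsumerConfig values:` dumps interleaved above follow a regular `component | key = value` shape, which makes them easy to fold back into a dictionary when comparing configuration across runs. A minimal sketch, over a small hypothetical sample of the same format:

```python
import re

# Hypothetical sample mirroring the "policy-apex-pdp | key = value" lines above.
LOG = """\
13:04:12 policy-apex-pdp | auto.offset.reset = latest
13:04:12 policy-apex-pdp | bootstrap.servers = [kafka:9092]
13:04:12 policy-apex-pdp | session.timeout.ms = 45000
13:04:12 policy-apex-pdp | ssl.keystore.location = null
"""

# timestamp, component name, "key = value"
LINE = re.compile(r"^\S+ (?P<comp>\S+) \| (?P<key>[\w.]+) = (?P<val>.*)$")

def parse_config_dump(text: str) -> dict[str, str]:
    """Collect the key = value pairs of a config dump into a dict."""
    cfg = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            cfg[m.group("key")] = m.group("val")
    return cfg

cfg = parse_config_dump(LOG)
```

Diffing two such dicts (one per build) quickly surfaces which consumer setting changed between runs, e.g. a flipped `auto.offset.reset`.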
13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | 13:04:12 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 13:04:12 policy-apex-pdp | connections.max.idle.ms = 540000 13:04:12 kafka | controller.quorum.fetch.timeout.ms = 2000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.61646371Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 13:04:12 policy-api | Waiting for mariadb port 3306... 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | default.api.timeout.ms = 60000 13:04:12 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 13:04:12 kafka | controller.quorum.request.timeout.ms = 2000 13:04:12 policy-pap | Waiting for mariadb port 3306... 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.618370688Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.905987ms 13:04:12 zookeeper | ===> User 13:04:12 policy-api | mariadb (172.17.0.3:3306) open 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 13:04:12 prometheus | ts=2024-05-02T13:01:42.177Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 13:04:12 policy-apex-pdp | enable.auto.commit = true 13:04:12 kafka | controller.quorum.retry.backoff.ms = 20 13:04:12 kafka | controller.quorum.voters = [] 13:04:12 policy-pap | Waiting for kafka port 9092... 13:04:12 policy-pap | mariadb (172.17.0.3:3306) open 13:04:12 policy-api | Waiting for policy-db-migrator port 6824... 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.621683345Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 13:04:12 policy-db-migrator | -------------- 13:04:12 prometheus | ts=2024-05-02T13:01:42.177Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" 13:04:12 policy-apex-pdp | exclude.internal.topics = true 13:04:12 kafka | controller.quota.window.num = 11 13:04:12 kafka | controller.quota.window.size.seconds = 1 13:04:12 policy-pap | kafka (172.17.0.8:9092) open 13:04:12 policy-pap | Waiting for api port 6969... 
13:04:12 policy-api | policy-db-migrator (172.17.0.6:6824) open 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.623665194Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.981199ms 13:04:12 policy-db-migrator | 13:04:12 prometheus | ts=2024-05-02T13:01:42.177Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" 13:04:12 policy-apex-pdp | fetch.max.bytes = 52428800 13:04:12 kafka | controller.socket.timeout.ms = 30000 13:04:12 simulator | overriding logback.xml 13:04:12 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 13:04:12 policy-pap | api (172.17.0.7:6969) open 13:04:12 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.626585856Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 13:04:12 policy-db-migrator | 13:04:12 prometheus | ts=2024-05-02T13:01:42.177Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 13:04:12 prometheus | ts=2024-05-02T13:01:42.177Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 13:04:12 kafka | create.topic.policy.class.name = null 13:04:12 simulator | 2024-05-02 13:01:36,447 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 13:04:12 zookeeper | ===> Configuring ... 
13:04:12 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 13:04:12 policy-api | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.62755649Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=970.314µs 13:04:12 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 13:04:12 prometheus | ts=2024-05-02T13:01:42.177Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 13:04:12 prometheus | ts=2024-05-02T13:01:42.186Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 13:04:12 kafka | default.replication.factor = 1 13:04:12 simulator | 2024-05-02 13:01:36,513 INFO org.onap.policy.models.simulators starting 13:04:12 zookeeper | ===> Running preflight checks ... 13:04:12 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 13:04:12 policy-api | . ____ _ __ _ _ 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.631666609Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 13:04:12 policy-db-migrator | -------------- 13:04:12 prometheus | ts=2024-05-02T13:01:42.186Z caller=main.go:1129 level=info msg="Starting TSDB ..." 13:04:12 prometheus | ts=2024-05-02T13:01:42.192Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 13:04:12 kafka | delegation.token.expiry.check.interval.ms = 3600000 13:04:12 simulator | 2024-05-02 13:01:36,514 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 13:04:12 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 
13:04:12 policy-pap | 13:04:12 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.633646728Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.977399ms 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-apex-pdp | fetch.max.wait.ms = 500 13:04:12 prometheus | ts=2024-05-02T13:01:42.192Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 13:04:12 kafka | delegation.token.expiry.time.ms = 86400000 13:04:12 simulator | 2024-05-02 13:01:36,690 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 13:04:12 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 13:04:12 policy-pap | . ____ _ __ _ _ 13:04:12 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.637078737Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | fetch.min.bytes = 1 13:04:12 prometheus | ts=2024-05-02T13:01:42.196Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 13:04:12 kafka | delegation.token.master.key = null 13:04:12 simulator | 2024-05-02 13:01:36,691 INFO org.onap.policy.models.simulators starting A&AI simulator 13:04:12 zookeeper | ===> Launching ... 
13:04:12 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 13:04:12 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.638013121Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=933.994µs 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | group.id = c1ea3ecb-3042-4296-b7e8-b195f884ad84 13:04:12 prometheus | ts=2024-05-02T13:01:42.196Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=5.08µs 13:04:12 kafka | delegation.token.max.lifetime.ms = 604800000 13:04:12 simulator | 2024-05-02 13:01:36,800 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:04:12 zookeeper | ===> Launching zookeeper ... 
13:04:12 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 13:04:12 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.641184827Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | group.instance.id = null 13:04:12 prometheus | ts=2024-05-02T13:01:42.196Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 13:04:12 kafka | delegation.token.secret.key = null 13:04:12 simulator | 2024-05-02 13:01:36,811 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 zookeeper | [2024-05-02 13:01:43,659] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 13:04:12 policy-api | =========|_|==============|___/=/_/_/_/ 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.64211179Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=929.353µs 13:04:12 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 13:04:12 policy-apex-pdp | heartbeat.interval.ms = 3000 
13:04:12 prometheus | ts=2024-05-02T13:01:42.197Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 13:04:12 kafka | delete.records.purgatory.purge.interval.requests = 1 13:04:12 simulator | 2024-05-02 13:01:36,814 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 zookeeper | [2024-05-02 13:01:43,666] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 13:04:12 policy-api | :: Spring Boot :: (v3.1.10) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.645930145Z level=info msg="Executing migration" id="Update dashboard table charset" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | interceptor.classes = [] 13:04:12 prometheus | ts=2024-05-02T13:01:42.197Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=142.602µs wal_replay_duration=537.418µs wbl_replay_duration=230ns total_replay_duration=771.741µs 13:04:12 kafka | delete.topic.enable = true 13:04:12 simulator | 2024-05-02 13:01:36,821 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; 
jvm 17.0.11+9-alpine-r0 13:04:12 zookeeper | [2024-05-02 13:01:43,666] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | =========|_|==============|___/=/_/_/_/ 13:04:12 policy-api | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.645959015Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.64µs 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-apex-pdp | internal.leave.group.on.close = true 13:04:12 prometheus | ts=2024-05-02T13:01:42.200Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 13:04:12 kafka | early.start.listeners = null 13:04:12 simulator | 2024-05-02 13:01:36,880 INFO Session workerName=node0 13:04:12 zookeeper | [2024-05-02 13:01:43,666] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | :: Spring Boot :: (v3.1.10) 13:04:12 policy-api | [2024-05-02T13:01:52.303+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.649091381Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 13:04:12 prometheus | ts=2024-05-02T13:01:42.200Z caller=main.go:1153 level=info msg="TSDB started" 13:04:12 kafka | fetch.max.bytes = 57671680 13:04:12 simulator | 2024-05-02 13:01:37,454 INFO Using GSON for REST calls 13:04:12 zookeeper | [2024-05-02 13:01:43,666] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | 13:04:12 policy-api | 
[2024-05-02T13:01:52.360+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.649120441Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.76µs 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | isolation.level = read_uncommitted 13:04:12 prometheus | ts=2024-05-02T13:01:42.200Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 13:04:12 kafka | fetch.purgatory.purge.interval.requests = 1000 13:04:12 simulator | 2024-05-02 13:01:37,528 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 13:04:12 zookeeper | [2024-05-02 13:01:43,668] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 13:04:12 policy-pap | [2024-05-02T13:02:05.470+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 13:04:12 policy-api | [2024-05-02T13:01:52.362+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.652158635Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 prometheus | ts=2024-05-02T13:01:42.201Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=923.534µs db_storage=1.9µs remote_storage=1.98µs web_handler=740ns query_engine=1.71µs scrape=219.393µs scrape_sd=132.312µs notify=28.3µs notify_sd=9.45µs rules=2.47µs tracing=5.25µs 13:04:12 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 13:04:12 simulator | 2024-05-02 13:01:37,535 
INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 13:04:12 zookeeper | [2024-05-02 13:01:43,668] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 13:04:12 policy-pap | [2024-05-02T13:02:05.551+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 31 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 13:04:12 policy-api | [2024-05-02T13:01:54.313+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.65530564Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.145865ms 13:04:12 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 13:04:12 policy-apex-pdp | max.partition.fetch.bytes = 1048576 13:04:12 prometheus | ts=2024-05-02T13:01:42.201Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 13:04:12 kafka | group.consumer.heartbeat.interval.ms = 5000 13:04:12 simulator | 2024-05-02 13:01:37,544 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1537ms 13:04:12 zookeeper | [2024-05-02 13:01:43,668] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 13:04:12 policy-pap | [2024-05-02T13:02:05.553+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.660662598Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | max.poll.interval.ms = 300000 13:04:12 prometheus | ts=2024-05-02T13:01:42.201Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
13:04:12 kafka | group.consumer.max.heartbeat.interval.ms = 15000 13:04:12 simulator | 2024-05-02 13:01:37,545 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4269 ms. 13:04:12 zookeeper | [2024-05-02 13:01:43,668] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 13:04:12 policy-pap | [2024-05-02T13:02:07.593+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.662717467Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.054269ms 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-apex-pdp | max.poll.records = 500 13:04:12 kafka | group.consumer.max.session.timeout.ms = 60000 13:04:12 simulator | 2024-05-02 13:01:37,552 INFO org.onap.policy.models.simulators starting SDNC simulator 13:04:12 zookeeper | [2024-05-02 13:01:43,669] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 13:04:12 policy-pap | [2024-05-02T13:02:07.695+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 93 ms. Found 7 JPA repository interfaces. 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.665961594Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | metadata.max.age.ms = 300000 13:04:12 kafka | group.consumer.max.size = 2147483647 13:04:12 simulator | 2024-05-02 13:01:37,556 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:04:12 zookeeper | [2024-05-02 13:01:43,669] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | [2024-05-02T13:02:08.167+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 13:04:12 policy-api | [2024-05-02T13:01:54.412+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 86 ms. 
Found 6 JPA repository interfaces. 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.668273067Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.308123ms 13:04:12 policy-db-migrator | 13:04:12 kafka | group.consumer.min.heartbeat.interval.ms = 5000 13:04:12 kafka | group.consumer.min.session.timeout.ms = 45000 13:04:12 simulator | 2024-05-02 13:01:37,556 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 zookeeper | [2024-05-02 13:01:43,670] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | [2024-05-02T13:02:08.168+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 13:04:12 policy-api | [2024-05-02T13:01:54.892+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.672492968Z level=info msg="Executing migration" id="Add column uid in dashboard" 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | metric.reporters = [] 13:04:12 kafka | group.consumer.session.timeout.ms = 45000 13:04:12 simulator | 2024-05-02 13:01:37,557 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 zookeeper | [2024-05-02 13:01:43,670] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | [2024-05-02T13:02:08.810+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 13:04:12 policy-api | [2024-05-02T13:01:54.893+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.674578258Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.08481ms 13:04:12 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 13:04:12 policy-apex-pdp | metrics.num.samples = 2 13:04:12 kafka | group.coordinator.new.enable = false 13:04:12 simulator | 2024-05-02 13:01:37,558 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 13:04:12 zookeeper | [2024-05-02 13:01:43,670] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | [2024-05-02T13:02:08.820+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 13:04:12 policy-api | [2024-05-02T13:01:55.544+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.677663803Z level=info msg="Executing migration" id="Update uid column values in dashboard" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | metrics.recording.level = INFO 13:04:12 kafka | group.coordinator.threads = 1 13:04:12 simulator | 2024-05-02 13:01:37,565 INFO Session workerName=node0 13:04:12 zookeeper | [2024-05-02 13:01:43,670] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:04:12 policy-pap | [2024-05-02T13:02:08.822+00:00|INFO|StandardService|main] Starting service [Tomcat] 13:04:12 policy-api | [2024-05-02T13:01:55.554+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.677968407Z level=info msg="Migration successfully executed" id="Update uid column values in 
dashboard" duration=304.034µs 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-apex-pdp | metrics.sample.window.ms = 30000 13:04:12 kafka | group.initial.rebalance.delay.ms = 3000 13:04:12 simulator | 2024-05-02 13:01:37,620 INFO Using GSON for REST calls 13:04:12 zookeeper | [2024-05-02 13:01:43,670] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 13:04:12 policy-pap | [2024-05-02T13:02:08.822+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 13:04:12 policy-api | [2024-05-02T13:01:55.556+00:00|INFO|StandardService|main] Starting service [Tomcat] 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.682181258Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:04:12 kafka | group.max.session.timeout.ms = 1800000 13:04:12 simulator | 2024-05-02 13:01:37,633 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 13:04:12 zookeeper | [2024-05-02 13:01:43,680] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics) 13:04:12 policy-pap | [2024-05-02T13:02:08.923+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 13:04:12 policy-api | [2024-05-02T13:01:55.556+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.683207013Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.025855ms 13:04:12 policy-db-migrator | 13:04:12 
policy-apex-pdp | receive.buffer.bytes = 65536 13:04:12 kafka | group.max.size = 2147483647 13:04:12 simulator | 2024-05-02 13:01:37,635 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 13:04:12 zookeeper | [2024-05-02 13:01:43,683] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 13:04:12 zookeeper | [2024-05-02 13:01:43,683] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 13:04:12 policy-pap | [2024-05-02T13:02:08.924+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3291 ms 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.686268767Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | reconnect.backoff.max.ms = 1000 13:04:12 kafka | group.min.session.timeout.ms = 6000 13:04:12 simulator | 2024-05-02 13:01:37,636 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1628ms 13:04:12 policy-api | [2024-05-02T13:01:55.653+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 13:04:12 zookeeper | [2024-05-02 13:01:43,685] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.687188Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=919.963µs 13:04:12 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 13:04:12 policy-apex-pdp | reconnect.backoff.ms = 50 13:04:12 kafka | initial.broker.registration.timeout.ms = 60000 13:04:12 policy-api | [2024-05-02T13:01:55.654+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3220 ms 13:04:12 
simulator | 2024-05-02 13:01:37,636 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms. 13:04:12 policy-pap | [2024-05-02T13:02:09.341+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.690109332Z level=info msg="Executing migration" id="Update dashboard title length" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | request.timeout.ms = 30000 13:04:12 kafka | inter.broker.listener.name = PLAINTEXT 13:04:12 policy-api | [2024-05-02T13:01:56.095+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 13:04:12 simulator | 2024-05-02 13:01:37,637 INFO org.onap.policy.models.simulators starting SO simulator 13:04:12 policy-pap | [2024-05-02T13:02:09.401+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.690137453Z level=info msg="Migration successfully executed" id="Update 
dashboard title length" duration=28.731µs 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 13:04:12 policy-apex-pdp | retry.backoff.ms = 100 13:04:12 kafka | inter.broker.protocol.version = 3.6-IV2 13:04:12 policy-api | [2024-05-02T13:01:56.168+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 13:04:12 simulator | 2024-05-02 13:01:37,640 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:04:12 policy-pap | [2024-05-02T13:02:09.748+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.693907677Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.client.callback.handler.class = null 13:04:12 kafka | kafka.metrics.polling.interval.secs = 10 13:04:12 policy-api | [2024-05-02T13:01:56.213+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 13:04:12 simulator | 2024-05-02 13:01:37,640 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 policy-pap | [2024-05-02T13:02:09.846+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@60cf62ad 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.695743244Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.834367ms 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.jaas.config = null 
13:04:12 kafka | kafka.metrics.reporters = [] 13:04:12 policy-api | [2024-05-02T13:01:56.495+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 13:04:12 simulator | 2024-05-02 13:01:37,641 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 policy-pap | [2024-05-02T13:02:09.849+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.701403525Z level=info msg="Executing migration" id="create dashboard_provisioning" 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:04:12 kafka | leader.imbalance.check.interval.seconds = 300 13:04:12 policy-api | [2024-05-02T13:01:56.524+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
13:04:12 simulator | 2024-05-02 13:01:37,642 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 13:04:12 policy-pap | [2024-05-02T13:02:09.883+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.702604613Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.200627ms 13:04:12 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 13:04:12 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 13:04:12 kafka | leader.imbalance.per.broker.percentage = 10 13:04:12 policy-api | [2024-05-02T13:01:56.614+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@312b34e3 13:04:12 simulator | 2024-05-02 13:01:37,644 INFO Session workerName=node0 13:04:12 policy-pap | [2024-05-02T13:02:11.481+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.706406817Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.kerberos.service.name = null 13:04:12 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 13:04:12 policy-api | [2024-05-02T13:01:56.616+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
13:04:12 simulator | 2024-05-02 13:01:37,695 INFO Using GSON for REST calls 13:04:12 policy-pap | [2024-05-02T13:02:11.493+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.71213877Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.731133ms 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:04:12 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 13:04:12 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 13:04:12 policy-api | [2024-05-02T13:01:58.658+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 13:04:12 simulator | 2024-05-02 13:01:37,706 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 13:04:12 policy-pap | [2024-05-02T13:02:12.007+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 13:04:12 zookeeper | [2024-05-02 13:01:43,694] INFO (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.71627533Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 13:04:12 kafka | log.cleaner.backoff.ms = 15000 13:04:12 policy-api | [2024-05-02T13:01:58.662+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 13:04:12 simulator | 2024-05-02 13:01:37,708 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 13:04:12 policy-pap | [2024-05-02T13:02:12.430+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.717097062Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=821.062µs 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.login.callback.handler.class = null 13:04:12 kafka | log.cleaner.dedupe.buffer.size = 134217728 13:04:12 policy-api | [2024-05-02T13:01:59.749+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 13:04:12 simulator | 2024-05-02 13:01:37,708 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1701ms 13:04:12 policy-pap | [2024-05-02T13:02:12.563+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:host.name=3d9e3ebcb179 (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.720329218Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.login.class = null 13:04:12 kafka | log.cleaner.delete.retention.ms = 86400000 13:04:12 policy-api | [2024-05-02T13:02:00.644+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 13:04:12 simulator | 2024-05-02 13:01:37,708 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4933 ms. 
13:04:12 policy-pap | [2024-05-02T13:02:12.852+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.721260732Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=931.494µs 13:04:12 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 13:04:12 policy-apex-pdp | sasl.login.connect.timeout.ms = null 13:04:12 kafka | log.cleaner.enable = true 13:04:12 policy-api | [2024-05-02T13:02:01.863+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 13:04:12 simulator | 2024-05-02 13:01:37,709 INFO org.onap.policy.models.simulators starting VFC simulator 13:04:12 policy-pap | allow.auto.create.topics = true 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.724657551Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 13:04:12 policy-apex-pdp | sasl.login.read.timeout.ms = null 13:04:12 kafka | log.cleaner.io.buffer.load.factor = 0.9 13:04:12 policy-api | [2024-05-02T13:02:02.088+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4fa650e1, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@54d8c998, org.springframework.security.web.context.SecurityContextHolderFilter@31f5829e, org.springframework.security.web.header.HeaderWriterFilter@2a384b46, org.springframework.security.web.authentication.logout.LogoutFilter@203f1447, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1c277413, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@72e6e93, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@32c29f7b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5da1f9b9, org.springframework.security.web.access.ExceptionTranslationFilter@4743220d, org.springframework.security.web.access.intercept.AuthorizationFilter@13a34a70] 13:04:12 simulator | 2024-05-02 13:01:37,711 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 13:04:12 policy-pap | auto.commit.interval.ms = 5000 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.726078901Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.42153ms 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 13:04:12 kafka | log.cleaner.io.buffer.size = 524288 13:04:12 policy-api | [2024-05-02T13:02:02.990+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 13:04:12 simulator | 2024-05-02 13:01:37,711 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 policy-pap | auto.include.jmx.reporter = true 13:04:12 policy-pap | auto.offset.reset = latest 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Fi
nal.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/ka
fka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tool
s-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:04:12 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 13:04:12 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 13:04:12 simulator | 2024-05-02 13:01:37,712 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.73085703Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 13:04:12 policy-pap | bootstrap.servers = [kafka:9092] 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 13:04:12 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 13:04:12 simulator | 2024-05-02 13:01:37,712 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.731283406Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=426.296µs 13:04:12 policy-pap | check.crcs = true 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 13:04:12 kafka | log.cleaner.min.cleanable.ratio = 0.5 13:04:12 simulator | 2024-05-02 13:01:37,721 INFO Session workerName=node0 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.734243619Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 13:04:12 policy-pap | client.dns.lookup = use_all_dns_ips 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 13:04:12 kafka | 
log.cleaner.min.compaction.lag.ms = 0 13:04:12 simulator | 2024-05-02 13:01:37,764 INFO Using GSON for REST calls 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.734954419Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=711.71µs 13:04:12 policy-pap | client.id = consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-1 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 13:04:12 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 13:04:12 kafka | log.cleaner.threads = 1 13:04:12 simulator | 2024-05-02 13:01:37,772 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.737931062Z level=info msg="Executing migration" id="Add check_sum column" 13:04:12 policy-pap | client.rack = 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.mechanism = GSSAPI 13:04:12 kafka | log.cleanup.policy = [delete] 13:04:12 simulator | 2024-05-02 13:01:37,774 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 13:04:12 policy-pap | connections.max.idle.ms = 540000 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 13:04:12 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 13:04:12 kafka | log.dir = /tmp/kafka-logs 13:04:12 grafana | logger=migrator 
t=2024-05-02T13:01:43.740229695Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.297853ms 13:04:12 policy-pap | default.api.timeout.ms = 60000 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 13:04:12 kafka | log.dirs = /var/lib/kafka/data 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.744810781Z level=info msg="Executing migration" id="Add index for dashboard_title" 13:04:12 simulator | 2024-05-02 13:01:37,774 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1766ms 13:04:12 policy-api | [2024-05-02T13:02:03.092+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 13:04:12 policy-pap | enable.auto.commit = true 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 13:04:12 kafka | log.flush.interval.messages = 9223372036854775807 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.745771965Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=961.474µs 13:04:12 policy-api | [2024-05-02T13:02:03.121+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 13:04:12 simulator | 2024-05-02 13:01:37,774 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4938 ms. 13:04:12 policy-pap | exclude.internal.topics = true 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:04:12 kafka | log.flush.interval.ms = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.797839926Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 13:04:12 policy-api | [2024-05-02T13:02:03.141+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.518 seconds (process running for 12.141) 13:04:12 simulator | 2024-05-02 13:01:37,775 INFO org.onap.policy.models.simulators started 13:04:12 policy-pap | fetch.max.bytes = 52428800 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 13:04:12 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:04:12 kafka | log.flush.offset.checkpoint.interval.ms = 60000 13:04:12 policy-api | [2024-05-02T13:02:18.886+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.798400814Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=564.598µs 13:04:12 policy-pap | fetch.max.wait.ms = 500 13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server 
environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:04:12 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
13:04:12 policy-api | [2024-05-02T13:02:18.886+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.801633261Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
13:04:12 policy-pap | fetch.min.bytes = 1
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
13:04:12 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
13:04:12 policy-api | [2024-05-02T13:02:18.887+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.802071487Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=437.206µs
13:04:12 policy-pap | group.id = ad46f4cb-cb07-4411-8d0e-379eef1836ce
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
13:04:12 kafka | log.index.interval.bytes = 4096
13:04:12 policy-api | [2024-05-02T13:02:19.199+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.806515371Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
13:04:12 policy-pap | group.instance.id = null
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
13:04:12 kafka | log.index.size.max.bytes = 10485760
13:04:12 policy-api | []
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.808427959Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.914188ms
13:04:12 policy-pap | heartbeat.interval.ms = 3000
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-db-migrator | 
13:04:12 kafka | log.local.retention.bytes = -2
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.813255879Z level=info msg="Executing migration" id="Add isPublic for dashboard"
13:04:12 policy-pap | interceptor.classes = []
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
13:04:12 kafka | log.local.retention.ms = -2
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.815971308Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.72099ms
13:04:12 policy-pap | internal.leave.group.on.close = true
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-apex-pdp | security.protocol = PLAINTEXT
13:04:12 policy-db-migrator | > upgrade 0450-pdpgroup.sql
13:04:12 kafka | log.message.downconversion.enable = true
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.819251715Z level=info msg="Executing migration" id="create data_source table"
13:04:12 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-apex-pdp | security.providers = null
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | log.message.format.version = 3.0-IV1
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.820346691Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.094576ms
13:04:12 policy-pap | isolation.level = read_uncommitted
13:04:12 zookeeper | [2024-05-02 13:01:43,696] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-apex-pdp | send.buffer.bytes = 131072
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
13:04:12 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.823871772Z level=info msg="Executing migration" id="add index data_source.account_id"
13:04:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:04:12 zookeeper | [2024-05-02 13:01:43,697] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
13:04:12 policy-apex-pdp | session.timeout.ms = 45000
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.824929357Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.057375ms
13:04:12 policy-pap | max.partition.fetch.bytes = 1048576
13:04:12 zookeeper | [2024-05-02 13:01:43,698] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
13:04:12 policy-db-migrator | 
13:04:12 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.829115187Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
13:04:12 policy-pap | max.poll.interval.ms = 300000
13:04:12 zookeeper | [2024-05-02 13:01:43,698] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
13:04:12 policy-db-migrator | 
13:04:12 kafka | log.message.timestamp.type = CreateTime
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.830282414Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.167287ms
13:04:12 policy-pap | max.poll.records = 500
13:04:12 zookeeper | [2024-05-02 13:01:43,699] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
13:04:12 policy-apex-pdp | ssl.cipher.suites = null
13:04:12 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
13:04:12 kafka | log.preallocate = false
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.83347683Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
13:04:12 policy-pap | metadata.max.age.ms = 300000
13:04:12 zookeeper | [2024-05-02 13:01:43,699] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
13:04:12 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.834391083Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=911.703µs
13:04:12 kafka | log.retention.bytes = -1
13:04:12 policy-pap | metric.reporters = []
13:04:12 zookeeper | [2024-05-02 13:01:43,700] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
13:04:12 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.837817423Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
13:04:12 kafka | log.retention.check.interval.ms = 300000
13:04:12 policy-pap | metrics.num.samples = 2
13:04:12 zookeeper | [2024-05-02 13:01:43,700] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
13:04:12 policy-apex-pdp | ssl.engine.factory.class = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.839185233Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.36741ms
13:04:12 kafka | log.retention.hours = 168
13:04:12 policy-pap | metrics.recording.level = INFO
13:04:12 zookeeper | [2024-05-02 13:01:43,700] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
13:04:12 policy-apex-pdp | ssl.key.password = null
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.843748748Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
13:04:12 kafka | log.retention.minutes = null
13:04:12 policy-pap | metrics.sample.window.ms = 30000
13:04:12 zookeeper | [2024-05-02 13:01:43,700] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
13:04:12 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.850866531Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.098873ms
13:04:12 kafka | log.retention.ms = null
13:04:12 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
13:04:12 policy-apex-pdp | ssl.keystore.certificate.chain = null
13:04:12 policy-db-migrator | > upgrade 0470-pdp.sql
13:04:12 zookeeper | [2024-05-02 13:01:43,700] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
13:04:12 zookeeper | [2024-05-02 13:01:43,700] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
13:04:12 kafka | log.roll.hours = 168
13:04:12 policy-pap | receive.buffer.bytes = 65536
13:04:12 policy-apex-pdp | ssl.keystore.key = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.854605855Z level=info msg="Executing migration" id="create data_source table v2"
13:04:12 zookeeper | [2024-05-02 13:01:43,702] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 kafka | log.roll.jitter.hours = 0
13:04:12 policy-pap | reconnect.backoff.max.ms = 1000
13:04:12 policy-apex-pdp | ssl.keystore.location = null
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.855394136Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=788.951µs
13:04:12 zookeeper | [2024-05-02 13:01:43,702] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 kafka | log.roll.jitter.ms = null
13:04:12 policy-pap | reconnect.backoff.ms = 50
13:04:12 policy-apex-pdp | ssl.keystore.password = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.860364298Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
13:04:12 zookeeper | [2024-05-02 13:01:43,703] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
13:04:12 kafka | log.roll.ms = null
13:04:12 policy-pap | request.timeout.ms = 30000
13:04:12 policy-apex-pdp | ssl.keystore.type = JKS
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.861429683Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.062755ms
13:04:12 zookeeper | [2024-05-02 13:01:43,703] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
13:04:12 kafka | log.segment.bytes = 1073741824
13:04:12 policy-pap | retry.backoff.ms = 100
13:04:12 policy-apex-pdp | ssl.protocol = TLSv1.3
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.866398685Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
13:04:12 zookeeper | [2024-05-02 13:01:43,703] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 kafka | log.segment.delete.delay.ms = 60000
13:04:12 policy-pap | sasl.client.callback.handler.class = null
13:04:12 policy-apex-pdp | ssl.provider = null
13:04:12 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.867973628Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.573043ms
13:04:12 zookeeper | [2024-05-02 13:01:43,722] INFO Logging initialized @551ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
13:04:12 kafka | max.connection.creation.rate = 2147483647
13:04:12 policy-pap | sasl.jaas.config = null
13:04:12 policy-apex-pdp | ssl.secure.random.implementation = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.871693772Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
13:04:12 zookeeper | [2024-05-02 13:01:43,803] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
13:04:12 kafka | max.connections = 2147483647
13:04:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:04:12 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.872487543Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=793.292µs
13:04:12 zookeeper | [2024-05-02 13:01:43,803] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
13:04:12 kafka | max.connections.per.ip = 2147483647
13:04:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
13:04:12 policy-apex-pdp | ssl.truststore.certificates = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.876774255Z level=info msg="Executing migration" id="Add column with_credentials"
13:04:12 zookeeper | [2024-05-02 13:01:43,825] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server)
13:04:12 kafka | max.connections.per.ip.overrides = 
13:04:12 policy-pap | sasl.kerberos.service.name = null
13:04:12 policy-apex-pdp | ssl.truststore.location = null
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.880239675Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.46369ms
13:04:12 zookeeper | [2024-05-02 13:01:43,856] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
13:04:12 kafka | max.incremental.fetch.session.cache.slots = 1000
13:04:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
13:04:12 policy-apex-pdp | ssl.truststore.password = null
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.884238032Z level=info msg="Executing migration" id="Add secure json data column"
13:04:12 zookeeper | [2024-05-02 13:01:43,856] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
13:04:12 kafka | message.max.bytes = 1048588
13:04:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
13:04:12 policy-apex-pdp | ssl.truststore.type = JKS
13:04:12 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.888151999Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.909397ms
13:04:12 zookeeper | [2024-05-02 13:01:43,858] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
13:04:12 kafka | metadata.log.dir = null
13:04:12 policy-pap | sasl.login.callback.handler.class = null
13:04:12 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.893011549Z level=info msg="Executing migration" id="Update data_source table charset"
13:04:12 zookeeper | [2024-05-02 13:01:43,863] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
13:04:12 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
13:04:12 policy-pap | sasl.login.class = null
13:04:12 policy-apex-pdp | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.89308313Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=75.891µs
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 zookeeper | [2024-05-02 13:01:43,873] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
13:04:12 kafka | metadata.log.max.snapshot.interval.ms = 3600000
13:04:12 policy-pap | sasl.login.connect.timeout.ms = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.600+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.898210794Z level=info msg="Executing migration" id="Update initial version to 1"
13:04:12 policy-db-migrator | --------------
13:04:12 zookeeper | [2024-05-02 13:01:43,895] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
13:04:12 kafka | metadata.log.segment.bytes = 1073741824
13:04:12 policy-pap | sasl.login.read.timeout.ms = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.600+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.89864893Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=439.916µs
13:04:12 policy-db-migrator | 
13:04:12 zookeeper | [2024-05-02 13:01:43,895] INFO Started @724ms (org.eclipse.jetty.server.Server)
13:04:12 kafka | metadata.log.segment.min.bytes = 8388608
13:04:12 policy-pap | sasl.login.refresh.buffer.seconds = 300
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.600+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654936600
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.902843591Z level=info msg="Executing migration" id="Add read_only data column"
13:04:12 policy-db-migrator | 
13:04:12 zookeeper | [2024-05-02 13:01:43,895] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
13:04:12 kafka | metadata.log.segment.ms = 604800000
13:04:12 policy-pap | sasl.login.refresh.min.period.seconds = 60
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.600+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Subscribed to topic(s): policy-pdp-pap
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.906076477Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.232686ms
13:04:12 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
13:04:12 zookeeper | [2024-05-02 13:01:43,902] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
13:04:12 kafka | metadata.max.idle.interval.ms = 500
13:04:12 policy-pap | sasl.login.refresh.window.factor = 0.8
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.601+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=afe9684c-cc9d-425f-9511-bbe785bd0624, alive=false, publisher=null]]: starting
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.910210547Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
13:04:12 policy-db-migrator | --------------
13:04:12 zookeeper | [2024-05-02 13:01:43,903] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
13:04:12 kafka | metadata.max.retention.bytes = 104857600
13:04:12 policy-pap | sasl.login.refresh.window.jitter = 0.05
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.614+00:00|INFO|ProducerConfig|main] ProducerConfig values:
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.910604623Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=398.396µs
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 zookeeper | [2024-05-02 13:01:43,905] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
13:04:12 kafka | metadata.max.retention.ms = 604800000
13:04:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000
13:04:12 policy-apex-pdp | acks = -1
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.91390382Z level=info msg="Executing migration" id="Update json_data with nulls"
13:04:12 policy-db-migrator | --------------
13:04:12 policy-db-migrator | 
13:04:12 zookeeper | [2024-05-02 13:01:43,907] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
13:04:12 kafka | metric.reporters = []
13:04:12 policy-pap | sasl.login.retry.backoff.ms = 100
13:04:12 policy-apex-pdp | auto.include.jmx.reporter = true
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.914287616Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=384.916µs
13:04:12 policy-db-migrator | 
13:04:12 zookeeper | [2024-05-02 13:01:43,930] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
13:04:12 kafka | metrics.num.samples = 2
13:04:12 policy-pap | sasl.mechanism = GSSAPI
13:04:12 policy-apex-pdp | batch.size = 16384
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.918614008Z level=info msg="Executing migration" id="Add uid column"
13:04:12 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
13:04:12 zookeeper | [2024-05-02 13:01:43,930] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
13:04:12 kafka | metrics.recording.level = INFO
13:04:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
13:04:12 policy-apex-pdp | bootstrap.servers = [kafka:9092]
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.921452959Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.838771ms
13:04:12 policy-db-migrator | --------------
13:04:12 zookeeper | [2024-05-02 13:01:43,932] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
13:04:12 kafka | metrics.sample.window.ms = 30000
13:04:12 policy-pap | sasl.oauthbearer.expected.audience = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.924762477Z level=info msg="Executing migration" id="Update uid value"
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
13:04:12 zookeeper | [2024-05-02 13:01:43,932] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
13:04:12 kafka | min.insync.replicas = 1
13:04:12 policy-apex-pdp | buffer.memory = 33554432
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.925105452Z level=info msg="Migration successfully executed" id="Update uid value" duration=339.825µs
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.92845912Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | sasl.oauthbearer.expected.issuer = null
13:04:12 zookeeper | [2024-05-02 13:01:43,939] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
13:04:12 kafka | node.id = 1
13:04:12 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.929643257Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.185207ms
13:04:12 policy-db-migrator | 
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:04:12 zookeeper | [2024-05-02 13:01:43,939] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
13:04:12 kafka | num.io.threads = 8
13:04:12 policy-apex-pdp | client.id = producer-1
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.933918179Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:04:12 zookeeper | [2024-05-02 13:01:43,944] INFO Snapshot loaded in 12 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
13:04:12 kafka | num.network.threads = 3
13:04:12 policy-apex-pdp | compression.type = none
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.935077106Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.162317ms
13:04:12 policy-db-migrator | 
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:04:12 zookeeper | [2024-05-02 13:01:43,945] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
13:04:12 kafka | num.partitions = 1
13:04:12 policy-apex-pdp | connections.max.idle.ms = 540000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.939725443Z level=info msg="Executing migration" id="create api_key table"
13:04:12 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:04:12 zookeeper | [2024-05-02 13:01:43,945] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
13:04:12 kafka | num.recovery.threads.per.data.dir = 1
13:04:12 policy-apex-pdp | delivery.timeout.ms = 120000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.941339226Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.609633ms
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:04:12 zookeeper | [2024-05-02 13:01:43,957] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
13:04:12 kafka | num.replica.alter.log.dirs.threads = null
13:04:12 policy-apex-pdp | enable.idempotence = true
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.945016409Z level=info msg="Executing migration" id="add index api_key.account_id"
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
13:04:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:04:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:04:12 kafka | num.replica.fetchers = 1
13:04:12 policy-apex-pdp | interceptor.classes = []
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.946961947Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.944818ms
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | security.protocol = PLAINTEXT
13:04:12 zookeeper | [2024-05-02 13:01:43,958] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
13:04:12 kafka | offset.metadata.max.bytes = 4096
13:04:12 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.950271005Z level=info msg="Executing migration" id="add index api_key.key"
13:04:12 policy-db-migrator | 
13:04:12 policy-pap | security.providers = null
13:04:12 zookeeper | [2024-05-02 13:01:43,977] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
13:04:12 kafka | offsets.commit.required.acks = -1
13:04:12 policy-apex-pdp | linger.ms = 0
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.951430422Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.157737ms
13:04:12 policy-pap | send.buffer.bytes = 131072
13:04:12 zookeeper | [2024-05-02 13:01:43,978] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
13:04:12 kafka | offsets.commit.timeout.ms = 5000
13:04:12 policy-apex-pdp | max.block.ms = 60000
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.956919501Z level=info msg="Executing migration" id="add index api_key.account_id_name"
13:04:12 policy-pap | session.timeout.ms = 45000
13:04:12 zookeeper | [2024-05-02 13:01:45,629] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
13:04:12 kafka | offsets.load.buffer.size = 5242880
13:04:12 policy-apex-pdp | max.in.flight.requests.per.connection = 5
13:04:12 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.959577809Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=2.659848ms
13:04:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:04:12 kafka | offsets.retention.check.interval.ms = 600000
13:04:12 policy-apex-pdp | max.request.size = 1048576
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.964286237Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
13:04:12 policy-pap | socket.connection.setup.timeout.ms = 10000
13:04:12 kafka | offsets.retention.minutes = 10080
13:04:12 policy-apex-pdp | metadata.max.age.ms = 300000
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.965838499Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.554382ms
13:04:12 policy-pap | ssl.cipher.suites = null
13:04:12 kafka | offsets.topic.compression.codec = 0
13:04:12 policy-apex-pdp | metadata.max.idle.ms = 300000
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.969853417Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
13:04:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:04:12 kafka | offsets.topic.num.partitions = 50
13:04:12 policy-apex-pdp | metric.reporters = []
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.970997234Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.144377ms
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.97419681Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
13:04:12 kafka | offsets.topic.replication.factor = 1
13:04:12 policy-apex-pdp | metrics.num.samples = 2
13:04:12 policy-db-migrator | 
13:04:12 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
13:04:12 kafka | offsets.topic.segment.bytes = 104857600
13:04:12 policy-pap | ssl.endpoint.identification.algorithm = https
13:04:12 policy-apex-pdp | metrics.recording.level = INFO
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.975348997Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.152837ms
13:04:12 policy-pap | ssl.engine.factory.class = null
13:04:12 policy-apex-pdp | metrics.sample.window.ms = 30000
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
13:04:12 kafka | password.encoder.iterations = 4096
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.97907182Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
13:04:12 policy-pap | ssl.key.password = null
13:04:12 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | password.encoder.key.length = 128
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.987144657Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.095657ms
13:04:12 policy-pap | ssl.keymanager.algorithm = SunX509
13:04:12 policy-apex-pdp | partitioner.availability.timeout.ms = 0
13:04:12 policy-db-migrator | 
13:04:12 kafka | password.encoder.keyfactory.algorithm = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.990609407Z level=info msg="Executing migration" id="create api_key table v2"
13:04:12 policy-pap | ssl.keystore.certificate.chain = null
13:04:12 policy-apex-pdp | partitioner.class = null
13:04:12 policy-db-migrator | 
13:04:12 kafka | password.encoder.old.secret = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.991275016Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=665.309µs
13:04:12 policy-pap | ssl.keystore.key = null
13:04:12 policy-apex-pdp | partitioner.ignore.keys = false
13:04:12 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
13:04:12 kafka | password.encoder.secret = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.994035016Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
13:04:12 policy-pap | ssl.keystore.location = null
13:04:12 policy-apex-pdp | receive.buffer.bytes = 32768
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.994635625Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=600.449µs
13:04:12 policy-pap | ssl.keystore.password = null
13:04:12 policy-apex-pdp | reconnect.backoff.max.ms = 1000
13:04:12 kafka | process.roles = []
13:04:12 policy-pap | ssl.keystore.type = JKS
13:04:12 policy-apex-pdp | reconnect.backoff.ms = 50
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:43.998775355Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
13:04:12 kafka | producer.id.expiration.check.interval.ms = 600000
13:04:12 policy-pap | ssl.protocol = TLSv1.3
13:04:12 policy-apex-pdp | request.timeout.ms = 30000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.000089734Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.314368ms
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | producer.id.expiration.ms = 86400000
13:04:12 policy-pap | ssl.provider = null
13:04:12 policy-apex-pdp | retries = 2147483647
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.003359153Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
13:04:12 policy-db-migrator | 
13:04:12 kafka | producer.purgatory.purge.interval.requests = 1000
13:04:12 policy-pap | ssl.secure.random.implementation = null
13:04:12 policy-apex-pdp | retry.backoff.ms = 100
13:04:12 grafana | logger=migrator
t=2024-05-02T13:01:44.004637306Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.277963ms 13:04:12 policy-db-migrator | 13:04:12 kafka | queued.max.request.bytes = -1 13:04:12 policy-pap | ssl.trustmanager.algorithm = PKIX 13:04:12 policy-apex-pdp | sasl.client.callback.handler.class = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.007984827Z level=info msg="Executing migration" id="copy api_key v1 to v2" 13:04:12 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 13:04:12 kafka | queued.max.requests = 500 13:04:12 policy-pap | ssl.truststore.certificates = null 13:04:12 policy-apex-pdp | sasl.jaas.config = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.008402975Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=417.287µs 13:04:12 kafka | quota.window.num = 11 13:04:12 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.012899176Z level=info msg="Executing migration" id="Drop old table api_key_v1" 13:04:12 policy-pap | ssl.truststore.location = null 13:04:12 kafka | quota.window.size.seconds = 1 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.013577958Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=678.172µs 13:04:12 policy-pap | ssl.truststore.password = null 13:04:12 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.016611213Z level=info msg="Executing migration" id="Update api_key table charset" 13:04:12 policy-pap | ssl.truststore.type = JKS 13:04:12 policy-apex-pdp | sasl.kerberos.service.name = null 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype 
(conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:04:12 kafka | remote.log.manager.task.interval.ms = 30000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.016757816Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=145.913µs 13:04:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.019311292Z level=info msg="Executing migration" id="Add expires to api_key table" 13:04:12 policy-pap | 13:04:12 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.023476488Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.164396ms 13:04:12 policy-apex-pdp | sasl.login.callback.handler.class = null 13:04:12 kafka | remote.log.manager.task.retry.backoff.ms = 500 13:04:12 policy-pap | [2024-05-02T13:02:13.013+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:04:12 policy-apex-pdp | sasl.login.class = null 13:04:12 kafka | remote.log.manager.task.retry.jitter = 0.2 13:04:12 policy-pap | [2024-05-02T13:02:13.013+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.028117472Z level=info msg="Executing migration" id="Add service account foreign key" 13:04:12 policy-apex-pdp | sasl.login.connect.timeout.ms = null 13:04:12 kafka | 
remote.log.manager.thread.pool.size = 10 13:04:12 policy-pap | [2024-05-02T13:02:13.013+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654933012 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.03078521Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.667008ms 13:04:12 policy-apex-pdp | sasl.login.read.timeout.ms = null 13:04:12 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 13:04:12 policy-pap | [2024-05-02T13:02:13.016+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-1, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Subscribed to topic(s): policy-pdp-pap 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.034143421Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 13:04:12 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 13:04:12 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 13:04:12 policy-pap | [2024-05-02T13:02:13.016+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.034388765Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=246.984µs 13:04:12 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 13:04:12 policy-pap | allow.auto.create.topics = true 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.037247347Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 13:04:12 kafka | remote.log.metadata.manager.class.path = null 13:04:12 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.039959896Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.713549ms 13:04:12 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
13:04:12 policy-pap | auto.commit.interval.ms = 5000 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.04291834Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 13:04:12 kafka | remote.log.metadata.manager.listener.name = null 13:04:12 policy-pap | auto.include.jmx.reporter = true 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.04565777Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.73885ms 13:04:12 kafka | remote.log.reader.max.pending.tasks = 100 13:04:12 policy-pap | auto.offset.reset = latest 13:04:12 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.050161191Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 13:04:12 kafka | remote.log.reader.threads = 10 13:04:12 policy-pap | bootstrap.servers = [kafka:9092] 13:04:12 policy-apex-pdp | sasl.mechanism = GSSAPI 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.051079588Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=917.537µs 13:04:12 kafka | remote.log.storage.manager.class.name = null 13:04:12 policy-pap | check.crcs = true 13:04:12 policy-db-migrator | > upgrade 0570-toscadatatype.sql 13:04:12 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.05397236Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 13:04:12 kafka | remote.log.storage.manager.class.path = null 13:04:12 policy-pap | client.dns.lookup = use_all_dns_ips 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 13:04:12 grafana | logger=migrator 
t=2024-05-02T13:01:44.054586271Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=613.331µs 13:04:12 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 13:04:12 policy-pap | client.id = consumer-policy-pap-2 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 13:04:12 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.057867811Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 13:04:12 kafka | remote.log.storage.system.enable = false 13:04:12 policy-pap | client.rack = 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.058779767Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=911.276µs 13:04:12 kafka | replica.fetch.backoff.ms = 1000 13:04:12 policy-pap | connections.max.idle.ms = 540000 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.062941623Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 13:04:12 kafka | replica.fetch.max.bytes = 1048576 13:04:12 policy-pap | default.api.timeout.ms = 60000 13:04:12 policy-db-migrator | 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.063838389Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=898.566µs 13:04:12 kafka | replica.fetch.min.bytes = 1 13:04:12 
policy-db-migrator | > upgrade 0580-toscadatatypes.sql 13:04:12 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.066849754Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 13:04:12 policy-pap | enable.auto.commit = true 13:04:12 kafka | replica.fetch.response.max.bytes = 10485760 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.067883302Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.033138ms 13:04:12 policy-pap | exclude.internal.topics = true 13:04:12 kafka | replica.fetch.wait.max.ms = 500 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 13:04:12 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.074066244Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 13:04:12 policy-pap | fetch.max.bytes = 52428800 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.075442569Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.376675ms 13:04:12 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 13:04:12 policy-pap | fetch.max.wait.ms = 500 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.081119092Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 13:04:12 policy-apex-pdp | security.protocol = PLAINTEXT 13:04:12 policy-pap | fetch.min.bytes = 1 13:04:12 kafka | replica.lag.time.max.ms 
= 30000 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.081221814Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=103.892µs 13:04:12 policy-apex-pdp | security.providers = null 13:04:12 policy-pap | group.id = policy-pap 13:04:12 kafka | replica.selector.class = null 13:04:12 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.088120069Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 13:04:12 policy-apex-pdp | send.buffer.bytes = 131072 13:04:12 policy-pap | group.instance.id = null 13:04:12 kafka | replica.socket.receive.buffer.bytes = 65536 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.08818387Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=68.051µs 13:04:12 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 13:04:12 policy-pap | heartbeat.interval.ms = 3000 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | replica.socket.timeout.ms = 30000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.093620659Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 13:04:12 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 13:04:12 policy-pap | interceptor.classes = [] 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:04:12 kafka | replication.quota.window.num = 11 13:04:12 policy-apex-pdp | 
ssl.cipher.suites = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.096704395Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.083445ms 13:04:12 policy-pap | internal.leave.group.on.close = true 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | replication.quota.window.size.seconds = 1 13:04:12 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.101303708Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 13:04:12 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:04:12 policy-db-migrator | 13:04:12 kafka | request.timeout.ms = 30000 13:04:12 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.103237283Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.933855ms 13:04:12 policy-pap | isolation.level = read_uncommitted 13:04:12 policy-db-migrator | 13:04:12 kafka | reserved.broker.max.id = 1000 13:04:12 policy-apex-pdp | ssl.engine.factory.class = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.106241717Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 13:04:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 13:04:12 kafka | sasl.client.callback.handler.class = null 13:04:12 policy-apex-pdp | ssl.key.password = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.106295808Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=53.871µs 13:04:12 policy-pap | max.partition.fetch.bytes = 1048576 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | sasl.enabled.mechanisms = [GSSAPI] 13:04:12 
policy-apex-pdp | ssl.keymanager.algorithm = SunX509 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.109090329Z level=info msg="Executing migration" id="create quota table v1" 13:04:12 policy-pap | max.poll.interval.ms = 300000 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 13:04:12 kafka | sasl.jaas.config = null 13:04:12 policy-apex-pdp | ssl.keystore.certificate.chain = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.109898844Z level=info msg="Migration successfully executed" id="create quota table v1" duration=808.165µs 13:04:12 policy-pap | max.poll.records = 500 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:04:12 policy-apex-pdp | ssl.keystore.key = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.112894518Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 13:04:12 policy-pap | metadata.max.age.ms = 300000 13:04:12 policy-db-migrator | 13:04:12 kafka | sasl.kerberos.min.time.before.relogin = 60000 13:04:12 policy-apex-pdp | ssl.keystore.location = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.113760054Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=864.695µs 13:04:12 policy-pap | metric.reporters = [] 13:04:12 policy-db-migrator | 13:04:12 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 13:04:12 policy-apex-pdp | ssl.keystore.password = null 13:04:12 grafana | logger=migrator 
t=2024-05-02T13:01:44.119748262Z level=info msg="Executing migration" id="Update quota table charset" 13:04:12 policy-pap | metrics.num.samples = 2 13:04:12 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 13:04:12 kafka | sasl.kerberos.service.name = null 13:04:12 policy-apex-pdp | ssl.keystore.type = JKS 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.119777253Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=30.101µs 13:04:12 policy-pap | metrics.recording.level = INFO 13:04:12 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 13:04:12 policy-apex-pdp | ssl.protocol = TLSv1.3 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.123197725Z level=info msg="Executing migration" id="create plugin_setting table" 13:04:12 policy-pap | metrics.sample.window.ms = 30000 13:04:12 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 13:04:12 policy-apex-pdp | ssl.provider = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.124080071Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=881.515µs 13:04:12 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:04:12 kafka | sasl.login.callback.handler.class = null 13:04:12 policy-apex-pdp | ssl.secure.random.implementation = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.12845072Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 13:04:12 policy-pap | receive.buffer.bytes = 65536 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | sasl.login.class = null 13:04:12 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.130283103Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.832113ms 13:04:12 policy-pap | 
reconnect.backoff.max.ms = 1000 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 13:04:12 kafka | sasl.login.connect.timeout.ms = null 13:04:12 policy-apex-pdp | ssl.truststore.certificates = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.137092676Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 13:04:12 policy-pap | reconnect.backoff.ms = 50 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | sasl.login.read.timeout.ms = null 13:04:12 policy-apex-pdp | ssl.truststore.location = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.14060531Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.511794ms 13:04:12 policy-pap | request.timeout.ms = 30000 13:04:12 policy-db-migrator | 13:04:12 kafka | sasl.login.refresh.buffer.seconds = 300 13:04:12 policy-apex-pdp | ssl.truststore.password = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.146637919Z level=info msg="Executing migration" id="Update plugin_setting table charset" 13:04:12 policy-pap | retry.backoff.ms = 100 13:04:12 policy-db-migrator | 13:04:12 kafka | sasl.login.refresh.min.period.seconds = 60 13:04:12 policy-apex-pdp | ssl.truststore.type = JKS 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.14666589Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.371µs 13:04:12 policy-pap | sasl.client.callback.handler.class = null 13:04:12 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 13:04:12 kafka | sasl.login.refresh.window.factor = 0.8 13:04:12 policy-apex-pdp | transaction.timeout.ms = 60000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.14944526Z level=info msg="Executing migration" id="create session table" 13:04:12 policy-pap | 
sasl.jaas.config = null 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | sasl.login.refresh.window.jitter = 0.05 13:04:12 policy-apex-pdp | transactional.id = null 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.150115942Z level=info msg="Migration successfully executed" id="create session table" duration=670.642µs 13:04:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:04:12 kafka | sasl.login.retry.backoff.max.ms = 10000 13:04:12 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.156115851Z level=info msg="Executing migration" id="Drop old table playlist table" 13:04:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | sasl.login.retry.backoff.ms = 100 13:04:12 policy-apex-pdp | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.15659008Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=474.609µs 13:04:12 policy-pap | sasl.kerberos.service.name = null 13:04:12 policy-db-migrator | 13:04:12 kafka | sasl.mechanism.controller.protocol = GSSAPI 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.624+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.160633933Z level=info msg="Executing migration" id="Drop old table playlist_item table" 13:04:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:04:12 policy-db-migrator | 13:04:12 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.641+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.160737515Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=106.202µs 13:04:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:04:12 kafka | sasl.oauthbearer.clock.skew.seconds = 30 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.641+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.223437241Z level=info msg="Executing migration" id="create playlist table v2" 13:04:12 policy-pap | sasl.login.callback.handler.class = null 13:04:12 kafka | sasl.oauthbearer.expected.audience = null 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.641+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654936641 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.225639921Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=2.20364ms 13:04:12 policy-pap | sasl.login.class = null 13:04:12 policy-db-migrator | > upgrade 0630-toscanodetype.sql 13:04:12 kafka | sasl.oauthbearer.expected.issuer = null 13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.642+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=afe9684c-cc9d-425f-9511-bbe785bd0624, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.230749813Z level=info msg="Executing migration" id="create playlist item table v2" 13:04:12 policy-pap | 
sasl.login.connect.timeout.ms = null
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.642+00:00|INFO|ServiceManager|main] service manager starting set alive
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.232112758Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.365045ms
13:04:12 policy-pap | sasl.login.read.timeout.ms = null
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
13:04:12 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.642+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.255986711Z level=info msg="Executing migration" id="Update playlist table charset"
13:04:12 policy-pap | sasl.login.refresh.buffer.seconds = 300
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.644+00:00|INFO|ServiceManager|main] service manager starting topic sinks
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.256046912Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=60.911µs
13:04:12 policy-pap | sasl.login.refresh.min.period.seconds = 60
13:04:12 policy-db-migrator | 
13:04:12 kafka | sasl.oauthbearer.jwks.endpoint.url = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.645+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.263384805Z level=info msg="Executing migration" id="Update playlist_item table charset"
13:04:12 policy-pap | sasl.login.refresh.window.factor = 0.8
13:04:12 policy-db-migrator | 
13:04:12 kafka | sasl.oauthbearer.scope.claim.name = scope
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.646+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.263420615Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=37.1µs
13:04:12 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
13:04:12 kafka | sasl.oauthbearer.sub.claim.name = sub
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.647+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
13:04:12 policy-pap | sasl.login.refresh.window.jitter = 0.05
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.266335678Z level=info msg="Executing migration" id="Add playlist column created_at"
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | sasl.oauthbearer.token.endpoint.url = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.647+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
13:04:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.270567445Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.229587ms
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
13:04:12 kafka | sasl.server.callback.handler.class = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.647+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c1ea3ecb-3042-4296-b7e8-b195f884ad84, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a
13:04:12 policy-pap | sasl.login.retry.backoff.ms = 100
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.280050977Z level=info msg="Executing migration" id="Add playlist column updated_at"
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | sasl.server.max.receive.size = 524288
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.647+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c1ea3ecb-3042-4296-b7e8-b195f884ad84, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
13:04:12 policy-pap | sasl.mechanism = GSSAPI
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.282775716Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.724379ms
13:04:12 policy-db-migrator | 
13:04:12 kafka | security.inter.broker.protocol = PLAINTEXT
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.647+00:00|INFO|ServiceManager|main] service manager starting Create REST server
13:04:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.285783141Z level=info msg="Executing migration" id="drop preferences table v2"
13:04:12 kafka | security.providers = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.663+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
13:04:12 policy-pap | sasl.oauthbearer.expected.audience = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.285868482Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=85.942µs
13:04:12 kafka | server.max.startup.time.ms = 9223372036854775807
13:04:12 policy-apex-pdp | []
13:04:12 policy-pap | sasl.oauthbearer.expected.issuer = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.289328085Z level=info msg="Executing migration" id="drop preferences table v3"
13:04:12 kafka | socket.connection.setup.timeout.max.ms = 30000
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.666+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.289412026Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=84.661µs
13:04:12 kafka | socket.connection.setup.timeout.ms = 10000
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.292321159Z level=info msg="Executing migration" id="create preferences table v3"
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"752713cd-7375-4df8-9e2b-db44444060b7","timestampMs":1714654936647,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.293126964Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=805.745µs
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.826+00:00|INFO|ServiceManager|main] service manager starting Rest Server
13:04:12 kafka | socket.listen.backlog.size = 50
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.298117524Z level=info msg="Executing migration" id="Update preferences table charset"
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.826+00:00|INFO|ServiceManager|main] service manager starting
13:04:12 kafka | socket.receive.buffer.bytes = 102400
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:04:12 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.298149745Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=31.151µs
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.826+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
13:04:12 kafka | socket.request.max.bytes = 104857600
13:04:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.301427394Z level=info msg="Executing migration" id="Add column team_id in preferences"
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.826+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:04:12 kafka | socket.send.buffer.bytes = 102400
13:04:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.307146338Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.718094ms
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.837+00:00|INFO|ServiceManager|main] service manager started
13:04:12 kafka | ssl.cipher.suites = []
13:04:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.310781813Z level=info msg="Executing migration" id="Update team_id column values in preferences"
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.837+00:00|INFO|ServiceManager|main] service manager started
13:04:12 kafka | ssl.client.auth = none
13:04:12 policy-pap | security.protocol = PLAINTEXT
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.311029038Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=247.445µs
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.837+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
13:04:12 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:04:12 policy-pap | security.providers = null
13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.318581895Z level=info msg="Executing migration" id="Add column week_start in preferences"
13:04:12 kafka | ssl.endpoint.identification.algorithm = https
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.838+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:04:12 policy-pap | send.buffer.bytes = 131072
13:04:12 policy-db-migrator | > upgrade 0660-toscaparameter.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.321844634Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.262489ms
13:04:12 kafka | ssl.engine.factory.class = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.977+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 241kjIVNQKeIb2Rrsc8nPA
13:04:12 policy-pap | session.timeout.ms = 45000
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.326450337Z level=info msg="Executing migration" id="Add column preferences.json_data"
13:04:12 kafka | ssl.key.password = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.977+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Cluster ID: 241kjIVNQKeIb2Rrsc8nPA
13:04:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.333696399Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=7.266182ms
13:04:12 kafka | ssl.keymanager.algorithm = SunX509
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.978+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
13:04:12 policy-pap | socket.connection.setup.timeout.ms = 10000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.33708223Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
13:04:12 kafka | ssl.keystore.certificate.chain = null
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.978+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
13:04:12 policy-pap | ssl.cipher.suites = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.337159001Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=77.141µs
13:04:12 kafka | ssl.keystore.key = null
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | [2024-05-02T13:02:16.987+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] (Re-)joining group
13:04:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.352816355Z level=info msg="Executing migration" id="Add preferences index org_id"
13:04:12 kafka | ssl.keystore.location = null
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | [2024-05-02T13:02:17.003+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Request joining group due to: need to re-join with the given member-id: consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2-159e516d-7b31-454d-a5ae-5ba81c7e6592
13:04:12 policy-pap | ssl.endpoint.identification.algorithm = https
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.355128977Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=2.175789ms
13:04:12 kafka | ssl.keystore.password = null
13:04:12 policy-db-migrator | > upgrade 0670-toscapolicies.sql
13:04:12 policy-apex-pdp | [2024-05-02T13:02:17.003+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
13:04:12 policy-pap | ssl.engine.factory.class = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.363189103Z level=info msg="Executing migration" id="Add preferences index user_id"
13:04:12 kafka | ssl.keystore.type = JKS
13:04:12 policy-apex-pdp | [2024-05-02T13:02:17.003+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] (Re-)joining group
13:04:12 policy-pap | ssl.key.password = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.364301833Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.11498ms
13:04:12 kafka | ssl.principal.mapping.rules = DEFAULT
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | [2024-05-02T13:02:17.466+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
13:04:12 kafka | ssl.protocol = TLSv1.3
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
13:04:12 policy-apex-pdp | [2024-05-02T13:02:17.469+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
13:04:12 policy-pap | ssl.keymanager.algorithm = SunX509
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.367786586Z level=info msg="Executing migration" id="create alert table v1"
13:04:12 kafka | ssl.provider = null
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | [2024-05-02T13:02:20.010+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2-159e516d-7b31-454d-a5ae-5ba81c7e6592', protocol='range'}
13:04:12 policy-pap | ssl.keystore.certificate.chain = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.369011059Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.223973ms
13:04:12 kafka | ssl.secure.random.implementation = null
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | [2024-05-02T13:02:20.018+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Finished assignment for group at generation 1: {consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2-159e516d-7b31-454d-a5ae-5ba81c7e6592=Assignment(partitions=[policy-pdp-pap-0])}
13:04:12 policy-pap | ssl.keystore.key = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.374003129Z level=info msg="Executing migration" id="add index alert org_id & id "
13:04:12 kafka | ssl.trustmanager.algorithm = PKIX
13:04:12 policy-apex-pdp | [2024-05-02T13:02:20.027+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2-159e516d-7b31-454d-a5ae-5ba81c7e6592', protocol='range'}
13:04:12 policy-pap | ssl.keystore.location = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.374998787Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=993.288µs
13:04:12 kafka | ssl.truststore.certificates = null
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | [2024-05-02T13:02:20.028+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
13:04:12 policy-pap | ssl.keystore.password = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.380086449Z level=info msg="Executing migration" id="add index alert state"
13:04:12 kafka | ssl.truststore.location = null
13:04:12 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
13:04:12 policy-apex-pdp | [2024-05-02T13:02:20.030+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Adding newly assigned partitions: policy-pdp-pap-0
13:04:12 policy-pap | ssl.keystore.type = JKS
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.381971743Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.889574ms
13:04:12 kafka | ssl.truststore.password = null
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | [2024-05-02T13:02:20.048+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Found no committed offset for partition policy-pdp-pap-0
13:04:12 policy-pap | ssl.protocol = TLSv1.3
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.387964582Z level=info msg="Executing migration" id="add index alert dashboard_id"
13:04:12 kafka | ssl.truststore.type = JKS
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:04:12 policy-apex-pdp | [2024-05-02T13:02:20.065+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2, groupId=c1ea3ecb-3042-4296-b7e8-b195f884ad84] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
13:04:12 policy-pap | ssl.provider = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.389809515Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.847653ms
13:04:12 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.647+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
13:04:12 policy-pap | ssl.secure.random.implementation = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.396556748Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
13:04:12 kafka | transaction.max.timeout.ms = 900000
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9e2c55c4-d511-4033-a47b-7bb40f039690","timestampMs":1714654956647,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:12 policy-pap | ssl.trustmanager.algorithm = PKIX
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.397941903Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.386995ms
13:04:12 kafka | transaction.partition.verification.enable = true
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.677+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:12 policy-pap | ssl.truststore.certificates = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.401785352Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
13:04:12 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9e2c55c4-d511-4033-a47b-7bb40f039690","timestampMs":1714654956647,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:12 policy-pap | ssl.truststore.location = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.403706487Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.917525ms
13:04:12 kafka | transaction.state.log.load.buffer.size = 5242880
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.681+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:04:12 policy-pap | ssl.truststore.password = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.40935335Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
13:04:12 kafka | transaction.state.log.min.isr = 2
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.842+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:12 policy-pap | ssl.truststore.type = JKS
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.410742545Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.392986ms
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:04:12 kafka | transaction.state.log.num.partitions = 50
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.417975076Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
13:04:12 policy-db-migrator | > upgrade 0690-toscapolicy.sql
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.857+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
13:04:12 policy-pap | 
13:04:12 kafka | transaction.state.log.replication.factor = 3
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.431308467Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.330031ms
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.857+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
13:04:12 kafka | transaction.state.log.segment.bytes = 104857600
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.436795397Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
13:04:12 policy-pap | [2024-05-02T13:02:13.022+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5d086639-f41e-480d-9e01-c44c133be1a9","timestampMs":1714654956857,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:12 kafka | transactional.id.expiration.ms = 604800000
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.43807833Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.289303ms
13:04:12 policy-pap | [2024-05-02T13:02:13.022+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.858+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
13:04:12 kafka | unclean.leader.election.enable = false
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.441822248Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
13:04:12 policy-pap | [2024-05-02T13:02:13.022+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654933022
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b5e972a4-d4dc-44d3-b1ee-eeb5e46402bb","timestampMs":1714654956858,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:12 kafka | unstable.api.versions.enable = false
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.442698734Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=876.576µs
13:04:12 policy-pap | [2024-05-02T13:02:13.022+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
13:04:12 policy-db-migrator | --------------
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.878+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:12 kafka | zookeeper.clientCnxnSocket = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.445631037Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
13:04:12 policy-pap | [2024-05-02T13:02:13.340+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5d086639-f41e-480d-9e01-c44c133be1a9","timestampMs":1714654956857,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:12 kafka | zookeeper.connect = zookeeper:2181
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.44582763Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=196.273µs
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.879+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:04:12 policy-pap | [2024-05-02T13:02:13.503+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
13:04:12 kafka | zookeeper.connection.timeout.ms = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.450352112Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
13:04:12 policy-db-migrator | 
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.885+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:12 kafka | zookeeper.max.in.flight.requests = 10
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.45076071Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=408.528µs
13:04:12 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
13:04:12 policy-pap | [2024-05-02T13:02:13.772+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@5ae16aa, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@344a065a, org.springframework.security.web.context.SecurityContextHolderFilter@14bf9fd0, org.springframework.security.web.header.HeaderWriterFilter@733bd6f3, org.springframework.security.web.authentication.logout.LogoutFilter@2e7517aa, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@50e24ea4, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@f4d391c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2fa2143d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3051e476, org.springframework.security.web.access.ExceptionTranslationFilter@29ee8174, org.springframework.security.web.access.intercept.AuthorizationFilter@8c18bde]
13:04:12 kafka | zookeeper.metadata.migration.enable = false
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b5e972a4-d4dc-44d3-b1ee-eeb5e46402bb","timestampMs":1714654956858,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.45575573Z level=info msg="Executing migration" id="create alert_notification table v1"
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | [2024-05-02T13:02:14.574+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
13:04:12 kafka | zookeeper.metadata.migration.min.batch.size = 200
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.886+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.45738547Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.63449ms
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
13:04:12 policy-pap | [2024-05-02T13:02:14.678+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
13:04:12 kafka | zookeeper.session.timeout.ms = 18000
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.930+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.460466436Z level=info msg="Executing migration" id="Add column is_default"
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | [2024-05-02T13:02:14.692+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
13:04:12 kafka | zookeeper.set.acl = false
13:04:12 policy-apex-pdp | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.468334738Z level=info msg="Migration successfully executed" id="Add column is_default" duration=7.867532ms
13:04:12 policy-db-migrator | 
13:04:12 policy-pap | [2024-05-02T13:02:14.712+00:00|INFO|ServiceManager|main] Policy PAP starting
13:04:12 kafka | zookeeper.ssl.cipher.suites = null
13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.932+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.474593822Z level=info msg="Executing migration" id="Add column frequency"
13:04:12 policy-db-migrator | 
13:04:12 policy-pap | [2024-05-02T13:02:14.712+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
13:04:12 kafka | zookeeper.ssl.client.enable = false
13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"36edb522-a0ae-4fef-a0d3-0df22ea01716","timestampMs":1714654956932,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.478619935Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.031253ms 13:04:12 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 13:04:12 policy-pap | [2024-05-02T13:02:14.713+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 13:04:12 kafka | zookeeper.ssl.crl.enable = false 13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.944+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.486871764Z level=info msg="Executing migration" id="Add column send_reminder" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | [2024-05-02T13:02:14.714+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 13:04:12 kafka | zookeeper.ssl.enabled.protocols = null 13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"36edb522-a0ae-4fef-a0d3-0df22ea01716","timestampMs":1714654956932,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.49048241Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.610286ms 13:04:12 policy-pap | [2024-05-02T13:02:14.714+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 13:04:12 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.944+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.49547168Z level=info msg="Executing migration" id="Add column disable_resolve_message" 13:04:12 kafka | zookeeper.ssl.keystore.location = null 13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.974+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:04:12 policy-pap | [2024-05-02T13:02:14.714+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 13:04:12 kafka | zookeeper.ssl.keystore.password = null 13:04:12 policy-apex-pdp | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","timestampMs":1714654956943,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.498962323Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.490773ms 13:04:12 kafka | zookeeper.ssl.keystore.type = null 13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.976+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 13:04:12 
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 13:04:12 policy-pap | [2024-05-02T13:02:14.714+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.502056899Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 13:04:12 kafka | zookeeper.ssl.ocsp.enable = false 13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2196446e-aee6-4b11-8da2-3f27281f9ed9","timestampMs":1714654956975,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | [2024-05-02T13:02:14.716+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ad46f4cb-cb07-4411-8d0e-379eef1836ce, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@237d0625 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.503122569Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.06513ms 13:04:12 kafka | 
zookeeper.ssl.protocol = TLSv1.2 13:04:12 policy-apex-pdp | [2024-05-02T13:02:36.985+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:04:12 policy-db-migrator | 13:04:12 policy-pap | [2024-05-02T13:02:14.729+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ad46f4cb-cb07-4411-8d0e-379eef1836ce, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.508651669Z level=info msg="Executing migration" id="Update alert table charset" 13:04:12 kafka | zookeeper.ssl.truststore.location = null 13:04:12 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2196446e-aee6-4b11-8da2-3f27281f9ed9","timestampMs":1714654956975,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:12 policy-db-migrator | 13:04:12 policy-pap | [2024-05-02T13:02:14.729+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.50869957Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=52.381µs 13:04:12 kafka | zookeeper.ssl.truststore.password = null 13:04:12 policy-apex-pdp | 
[2024-05-02T13:02:36.985+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 13:04:12 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 13:04:12 policy-pap | allow.auto.create.topics = true 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.516271777Z level=info msg="Executing migration" id="Update alert_notification table charset" 13:04:12 kafka | zookeeper.ssl.truststore.type = null 13:04:12 policy-apex-pdp | [2024-05-02T13:02:56.170+00:00|INFO|RequestLog|qtp739264372-31] 172.17.0.2 - policyadmin [02/May/2024:13:02:56 +0000] "GET /metrics HTTP/1.1" 200 10645 "-" "Prometheus/2.51.2" 13:04:12 policy-pap | auto.commit.interval.ms = 5000 13:04:12 kafka | (kafka.server.KafkaConfig) 13:04:12 policy-apex-pdp | [2024-05-02T13:03:56.085+00:00|INFO|RequestLog|qtp739264372-29] 172.17.0.2 - policyadmin [02/May/2024:13:03:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/2.51.2" 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.516304507Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=31.08µs 13:04:12 kafka | [2024-05-02 13:01:47,485] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.520094346Z level=info msg="Executing migration" id="create notification_journal table v1" 13:04:12 kafka | [2024-05-02 13:01:47,487] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:04:12 policy-pap | auto.include.jmx.reporter = true 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version 
VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.52086242Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=767.984µs 13:04:12 kafka | [2024-05-02 13:01:47,490] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:04:12 policy-pap | auto.offset.reset = latest 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.523842124Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 13:04:12 kafka | [2024-05-02 13:01:47,492] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:04:12 policy-pap | bootstrap.servers = [kafka:9092] 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.52470466Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=862.016µs 13:04:12 kafka | [2024-05-02 13:01:47,528] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 13:04:12 policy-pap | check.crcs = true 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.531002614Z level=info msg="Executing migration" id="drop alert_notification_journal" 13:04:12 kafka | [2024-05-02 13:01:47,564] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 13:04:12 policy-pap | client.dns.lookup = use_all_dns_ips 13:04:12 policy-db-migrator | > upgrade 0730-toscaproperty.sql 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.532672334Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.66873ms 13:04:12 kafka | [2024-05-02 
13:01:47,574] INFO Loaded 0 logs in 46ms (kafka.log.LogManager) 13:04:12 policy-pap | client.id = consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:01:47,576] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.537584423Z level=info msg="Executing migration" id="create alert_notification_state table v1" 13:04:12 policy-pap | client.rack = 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:04:12 kafka | [2024-05-02 13:01:47,577] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.539397546Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.810473ms 13:04:12 policy-pap | connections.max.idle.ms = 540000 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:01:47,590] INFO Starting the log cleaner (kafka.log.LogCleaner) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.546600856Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 13:04:12 policy-pap | default.api.timeout.ms = 60000 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:01:47,643] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.547564654Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=963.738µs 13:04:12 policy-pap | enable.auto.commit = true 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:01:47,661] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.550790842Z level=info msg="Executing migration" id="Add for to alert table" 13:04:12 policy-pap | exclude.internal.topics = true 13:04:12 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 13:04:12 kafka | [2024-05-02 13:01:47,681] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.5545168Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.724868ms 13:04:12 policy-pap | fetch.max.bytes = 52428800 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:01:47,731] INFO 
[zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.557599076Z level=info msg="Executing migration" id="Add column uid in alert_notification" 13:04:12 policy-pap | fetch.max.wait.ms = 500 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 13:04:12 kafka | [2024-05-02 13:01:48,058] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.561360544Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.761078ms 13:04:12 policy-pap | fetch.min.bytes = 1 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:01:48,080] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.567839411Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 13:04:12 policy-pap | group.id = ad46f4cb-cb07-4411-8d0e-379eef1836ce 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:01:48,080] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.568020645Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=181.713µs 13:04:12 policy-pap | group.instance.id = null 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:01:48,086] INFO [SocketServer listenerType=ZK_BROKER, 
nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.570762894Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 13:04:12 policy-pap | heartbeat.interval.ms = 3000 13:04:12 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.57161842Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=854.866µs 13:04:12 policy-pap | interceptor.classes = [] 13:04:12 kafka | [2024-05-02 13:01:48,092] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.575131173Z level=info msg="Executing migration" id="Remove unique index org_id_name" 13:04:12 policy-pap | internal.leave.group.on.close = true 13:04:12 kafka | [2024-05-02 13:01:48,118] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.576167282Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.038909ms 13:04:12 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:04:12 kafka | [2024-05-02 13:01:48,120] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.583119578Z level=info msg="Executing migration" id="Add column secure_settings in 
alert_notification" 13:04:12 policy-pap | isolation.level = read_uncommitted 13:04:12 kafka | [2024-05-02 13:01:48,121] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.588787471Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.668503ms 13:04:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 kafka | [2024-05-02 13:01:48,122] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.591934138Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 13:04:12 policy-pap | max.partition.fetch.bytes = 1048576 13:04:12 kafka | [2024-05-02 13:01:48,124] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.592003099Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=69.111µs 13:04:12 policy-pap | max.poll.interval.ms = 300000 13:04:12 kafka | [2024-05-02 13:01:48,137] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.596125634Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 13:04:12 policy-pap | max.poll.records = 500 13:04:12 kafka | [2024-05-02 13:01:48,138] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 13:04:12 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.59704921Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=928.356µs 13:04:12 policy-pap | metadata.max.age.ms = 300000 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.603010859Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 13:04:12 kafka | [2024-05-02 13:01:48,164] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 13:04:12 policy-pap | metric.reporters = [] 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.603892064Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=893.216µs 13:04:12 kafka | [2024-05-02 13:01:48,190] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714654908178,1714654908178,1,0,0,72057610435887105,258,0,27 13:04:12 policy-pap | metrics.num.samples = 2 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.606790417Z level=info msg="Executing migration" id="Drop old annotation table v4" 13:04:12 kafka | (kafka.zk.KafkaZkClient) 13:04:12 policy-pap | metrics.recording.level = INFO 13:04:12 policy-db-migrator | > upgrade 0770-toscarequirement.sql 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.606907179Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=117.002µs 13:04:12 kafka 
| [2024-05-02 13:01:48,192] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 13:04:12 policy-pap | metrics.sample.window.ms = 30000 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.650229784Z level=info msg="Executing migration" id="create annotation table v5" 13:04:12 kafka | [2024-05-02 13:01:48,252] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 13:04:12 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.652285361Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=2.057707ms 13:04:12 kafka | [2024-05-02 13:01:48,259] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-pap | receive.buffer.bytes = 65536 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.659801718Z level=info msg="Executing migration" id="add index annotation 0 v3" 13:04:12 kafka | [2024-05-02 13:01:48,267] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-pap | reconnect.backoff.max.ms = 1000 13:04:12 policy-db-migrator | 
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.660746865Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=947.138µs 13:04:12 kafka | [2024-05-02 13:01:48,270] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:04:12 policy-pap | reconnect.backoff.ms = 50 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.664313429Z level=info msg="Executing migration" id="add index annotation 1 v3" 13:04:12 kafka | [2024-05-02 13:01:48,274] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 13:04:12 policy-pap | request.timeout.ms = 30000 13:04:12 policy-db-migrator | > upgrade 0780-toscarequirements.sql 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.665141584Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=827.575µs 13:04:12 kafka | [2024-05-02 13:01:48,286] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 13:04:12 policy-pap | retry.backoff.ms = 100 13:04:12 policy-db-migrator | -------------- 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.669274589Z level=info msg="Executing migration" id="add index annotation 2 v3" 13:04:12 kafka | [2024-05-02 13:01:48,289] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator)
13:04:12 policy-pap | sasl.client.callback.handler.class = null
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.670901719Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.63688ms
13:04:12 kafka | [2024-05-02 13:01:48,294] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.jaas.config = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.676999849Z level=info msg="Executing migration" id="add index annotation 3 v3"
13:04:12 kafka | [2024-05-02 13:01:48,294] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
13:04:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.677743093Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=743.254µs
13:04:12 kafka | [2024-05-02 13:01:48,303] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
13:04:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.686912039Z level=info msg="Executing migration" id="add index annotation 4 v3"
13:04:12 kafka | [2024-05-02 13:01:48,311] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
13:04:12 policy-pap | sasl.kerberos.service.name = null
13:04:12 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.687766014Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=853.885µs
13:04:12 kafka | [2024-05-02 13:01:48,315] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
13:04:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.691092265Z level=info msg="Executing migration" id="Update annotation table charset"
13:04:12 kafka | [2024-05-02 13:01:48,315] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
13:04:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.691113765Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.311µs
13:04:12 kafka | [2024-05-02 13:01:48,339] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
13:04:12 policy-pap | sasl.login.callback.handler.class = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.696470512Z level=info msg="Executing migration" id="Add column region_id to annotation table"
13:04:12 kafka | [2024-05-02 13:01:48,339] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.login.class = null
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.699559778Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.085626ms
13:04:12 kafka | [2024-05-02 13:01:48,346] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.login.connect.timeout.ms = null
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.703936557Z level=info msg="Executing migration" id="Drop category_id index"
13:04:12 kafka | [2024-05-02 13:01:48,350] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.login.read.timeout.ms = null
13:04:12 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.704546818Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=610.011µs
13:04:12 kafka | [2024-05-02 13:01:48,355] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.707534612Z level=info msg="Executing migration" id="Add column tags to annotation table"
13:04:12 kafka | [2024-05-02 13:01:48,370] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
13:04:12 policy-pap | sasl.login.refresh.buffer.seconds =
300
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.710280762Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.74593ms
13:04:12 kafka | [2024-05-02 13:01:48,375] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.login.refresh.min.period.seconds = 60
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.713787676Z level=info msg="Executing migration" id="Create annotation_tag table v2"
13:04:12 kafka | [2024-05-02 13:01:48,381] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.login.refresh.window.factor = 0.8
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.714311965Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=521.469µs
13:04:12 kafka | [2024-05-02 13:01:48,393] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
13:04:12 policy-pap | sasl.login.refresh.window.jitter = 0.05
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.719103382Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
13:04:12 kafka | [2024-05-02 13:01:48,410] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
13:04:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000
13:04:12 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.719767124Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=661.532µs
13:04:12 kafka | [2024-05-02 13:01:48,420] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
13:04:12 policy-pap | sasl.login.retry.backoff.ms = 100
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.723097924Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
13:04:12 kafka | [2024-05-02 13:01:48,422] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
13:04:12 policy-pap | sasl.mechanism = GSSAPI
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.723721616Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=621.142µs
13:04:12 kafka | [2024-05-02 13:01:48,424] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.727096587Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
13:04:12 kafka | [2024-05-02 13:01:48,425] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
13:04:12 policy-pap | sasl.oauthbearer.expected.audience = null
13:04:12 policy-db-migrator |
13:04:12 kafka | [2024-05-02 13:01:48,425] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.735430728Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.332091ms
13:04:12 policy-pap | sasl.oauthbearer.expected.issuer = null
13:04:12 policy-db-migrator |
13:04:12 kafka | [2024-05-02 13:01:48,425] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.741535579Z level=info msg="Executing migration" id="Create annotation_tag table v3"
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:04:12 policy-db-migrator | > upgrade 0820-toscatrigger.sql
13:04:12 kafka | [2024-05-02 13:01:48,426] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.742050878Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=515.63µs
13:04:12 policy-pap |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | [2024-05-02 13:01:48,428] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.745025742Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:04:12 kafka | [2024-05-02 13:01:48,429] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
13:04:12 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:04:12 kafka | [2024-05-02 13:01:48,430] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.745638813Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=612.821µs
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:04:12 kafka | [2024-05-02 13:01:48,430] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.748643357Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
13:04:12 policy-db-migrator |
13:04:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:04:12 kafka | [2024-05-02 13:01:48,431] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.748843921Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=200.104µs
13:04:12 policy-db-migrator |
13:04:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:04:12 kafka | [2024-05-02 13:01:48,432] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.753902133Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
13:04:12 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
13:04:12 policy-pap | security.protocol = PLAINTEXT
13:04:12 kafka | [2024-05-02 13:01:48,436] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.75429131Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=388.717µs
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | security.providers = null
13:04:12 kafka | [2024-05-02 13:01:48,446] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.758911153Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
13:04:12 policy-pap | send.buffer.bytes = 131072
13:04:12 kafka | [2024-05-02 13:01:48,447] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.759065516Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=154.073µs
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | session.timeout.ms = 45000
13:04:12 kafka | [2024-05-02 13:01:48,449] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.762122282Z level=info msg="Executing migration" id="Add created time to annotation table"
13:04:12 policy-db-migrator |
13:04:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:04:12 kafka | [2024-05-02 13:01:48,450] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.764982953Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.860191ms
13:04:12 policy-db-migrator |
13:04:12 policy-pap | socket.connection.setup.timeout.ms = 10000
13:04:12 kafka | [2024-05-02 13:01:48,450] INFO Kafka startTimeMs: 1714654908437 (org.apache.kafka.common.utils.AppInfoParser)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.772669263Z level=info msg="Executing migration" id="Add updated time to annotation table"
13:04:12 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
13:04:12 policy-pap | ssl.cipher.suites = null
13:04:12 kafka | [2024-05-02 13:01:48,455] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.776169676Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.498263ms
13:04:12 policy-db-migrator |
--------------
13:04:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:04:12 kafka | [2024-05-02 13:01:48,456] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.782348238Z level=info msg="Executing migration" id="Add index for created in annotation table"
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
13:04:12 policy-pap | ssl.endpoint.identification.algorithm = https
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.78302456Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=675.922µs
13:04:12 kafka | [2024-05-02 13:01:48,457] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | ssl.engine.factory.class = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.786049405Z level=info msg="Executing migration" id="Add index for updated in annotation table"
13:04:12 kafka | [2024-05-02 13:01:48,457] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
13:04:12 policy-db-migrator |
13:04:12 policy-pap | ssl.key.password = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.786690617Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=640.782µs
13:04:12 kafka | [2024-05-02 13:01:48,458] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
13:04:12 policy-db-migrator |
13:04:12 policy-pap | ssl.keymanager.algorithm = SunX509
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.790826822Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
13:04:12 kafka | [2024-05-02 13:01:48,462] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
13:04:12 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
13:04:12 policy-pap | ssl.keystore.certificate.chain = null
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.790980784Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=170.343µs
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.794098451Z level=info msg="Executing migration" id="Add epoch_end column"
13:04:12 kafka | [2024-05-02 13:01:48,462] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
13:04:12 policy-pap | ssl.keystore.key = null
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.796992663Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.893852ms
13:04:12 kafka | [2024-05-02 13:01:48,466] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
13:04:12 policy-pap | ssl.keystore.location = null
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.8001307Z level=info msg="Executing migration" id="Add index for epoch_end"
13:04:12 kafka | [2024-05-02 13:01:48,474] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
13:04:12 policy-pap | ssl.keystore.password = null
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.800736261Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=605.461µs
13:04:12 kafka | [2024-05-02 13:01:48,474] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
13:04:12 policy-pap | ssl.keystore.type = JKS
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.804928657Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
13:04:12 kafka | [2024-05-02 13:01:48,474] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
13:04:12 policy-pap | ssl.protocol = TLSv1.3
13:04:12 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.805047909Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=120.442µs
13:04:12 kafka | [2024-05-02 13:01:48,475] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.807226349Z level=info msg="Executing migration" id="Move region to single row"
13:04:12 kafka | [2024-05-02 13:01:48,476] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
13:04:12 policy-pap | ssl.provider = null
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.807519554Z level=info msg="Migration successfully executed" id="Move region to single row" duration=292.755µs
13:04:12 kafka | [2024-05-02 13:01:48,497] INFO [Controller id=1] Starting the controller scheduler
(kafka.controller.KafkaController)
13:04:12 policy-pap | ssl.secure.random.implementation = null
13:04:12 kafka | [2024-05-02 13:01:48,523] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.813698886Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
13:04:12 policy-pap | ssl.trustmanager.algorithm = PKIX
13:04:12 kafka | [2024-05-02 13:01:48,549] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.814955759Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.257333ms
13:04:12 policy-pap | ssl.truststore.certificates = null
13:04:12 policy-db-migrator |
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.820771154Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
13:04:12 kafka | [2024-05-02 13:01:48,601] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
13:04:12 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.822026737Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.255703ms
13:04:12 policy-pap | ssl.truststore.location = null
13:04:12 kafka | [2024-05-02 13:01:53,499] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.825271786Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
13:04:12 kafka | [2024-05-02 13:01:53,500] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.826191582Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=919.716µs
13:04:12 policy-pap | ssl.truststore.password = null
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
13:04:12 kafka | [2024-05-02 13:02:15,258] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.829516073Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
13:04:12 policy-pap | ssl.truststore.type = JKS
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | [2024-05-02 13:02:15,259] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.830438739Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=922.246µs
13:04:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:04:12 policy-db-migrator |
13:04:12 kafka | [2024-05-02 13:02:15,262] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.835300087Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
13:04:12 policy-pap |
13:04:12 policy-db-migrator |
13:04:12 kafka | [2024-05-02 13:02:15,266] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.836350107Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.046729ms
13:04:12 policy-pap | [2024-05-02T13:02:14.736+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
13:04:12 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
13:04:12 kafka | [2024-05-02 13:02:15,301] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(5RKvEIxNQGa16PmnlTf3Lw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.839977902Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
13:04:12 policy-pap | [2024-05-02T13:02:14.736+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | [2024-05-02 13:02:15,302] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.840838748Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=857.346µs
13:04:12 policy-pap | [2024-05-02T13:02:14.736+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654934736
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
13:04:12 kafka | [2024-05-02 13:02:15,305] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.84429326Z level=info msg="Executing migration" id="Increase tags column to length 4096"
13:04:12 policy-pap | [2024-05-02T13:02:14.736+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Subscribed to topic(s): policy-pdp-pap
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | [2024-05-02 13:02:15,305] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.844391412Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=98.622µs
13:04:12 policy-pap | [2024-05-02T13:02:14.737+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
13:04:12 policy-db-migrator |
13:04:12 kafka | [2024-05-02 13:02:15,309] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.854623138Z level=info msg="Executing migration" id="create test_data table"
13:04:12 policy-pap | [2024-05-02T13:02:14.737+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2a10f283-dfd3-4508-92be-aa54e477288d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@40105b39
13:04:12 policy-db-migrator |
13:04:12 kafka | [2024-05-02 13:02:15,309] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.855977462Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.354964ms
13:04:12 policy-pap | [2024-05-02T13:02:14.737+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2a10f283-dfd3-4508-92be-aa54e477288d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
13:04:12 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
13:04:12 kafka | [2024-05-02 13:02:15,356] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.86138007Z level=info msg="Executing migration" id="create dashboard_version table v1"
13:04:12 policy-pap | [2024-05-02T13:02:14.737+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
13:04:12 policy-db-migrator | --------------
13:04:12 kafka | [2024-05-02 13:02:15,360] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.862652263Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.269973ms
13:04:12 policy-pap | allow.auto.create.topics = true
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON
toscaservicetemplate(policyTypesName, policyTypesVersion)
13:04:12 kafka | [2024-05-02 13:02:15,365] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.867040523Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
13:04:12 policy-pap | auto.commit.interval.ms = 5000
13:04:12 policy-db-migrator | --------------
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.868513869Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.473226ms
13:04:12 policy-db-migrator |
13:04:12 policy-pap | auto.include.jmx.reporter = true
13:04:12 kafka | [2024-05-02 13:02:15,369] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.873294896Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
13:04:12 policy-db-migrator |
13:04:12 policy-pap | auto.offset.reset = latest
13:04:12 kafka | [2024-05-02 13:02:15,370] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
13:04:12 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.874857694Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.561608ms
13:04:12 policy-pap | bootstrap.servers = [kafka:9092]
13:04:12 kafka | [2024-05-02 13:02:15,370] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:04:12 policy-db-migrator | --------------
13:04:12 policy-pap | check.crcs = true
13:04:12 kafka | [2024-05-02 13:02:15,380] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.878728214Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.878935988Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=209.744µs
13:04:12 kafka | [2024-05-02 13:02:15,384] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(4WMw632vQDSuZYp6c_DPsA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 ->
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 13:04:12 policy-pap | client.dns.lookup = use_all_dns_ips 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,386] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 13:04:12 policy-pap | client.id = consumer-policy-pap-4 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.882125306Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | client.rack = 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.882463382Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=337.796µs 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | connections.max.idle.ms = 540000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.885807733Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 13:04:12 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | default.api.timeout.ms = 60000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.885904734Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=96.351µs 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | enable.auto.commit = true 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.893299508Z level=info msg="Executing migration" id="create team table" 13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | exclude.internal.topics = true 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.894411169Z level=info msg="Migration successfully executed" id="create team table" duration=1.11265ms 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | fetch.max.bytes = 52428800 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.900149953Z level=info msg="Executing migration" id="add index team.org_id" 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | fetch.max.wait.ms = 500 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.90166144Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.514648ms 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | fetch.min.bytes = 1 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.905126323Z level=info msg="Executing migration" id="add unique index team_org_id_name" 13:04:12 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | group.id = policy-pap 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.906274513Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.14814ms 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | group.instance.id = null 13:04:12 grafana | logger=migrator 
t=2024-05-02T13:01:44.910223505Z level=info msg="Executing migration" id="Add column uid in team" 13:04:12 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | heartbeat.interval.ms = 3000 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.914715376Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.491531ms 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-pap | interceptor.classes = [] 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.918186579Z level=info msg="Executing migration" id="Update uid column values in team" 13:04:12 policy-db-migrator | 13:04:12 policy-pap | internal.leave.group.on.close = true 13:04:12 kafka | [2024-05-02 13:02:15,387] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.918356252Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=169.323µs 13:04:12 policy-db-migrator | 13:04:12 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.921567991Z level=info msg="Executing migration" id="Add 
unique index team_org_id_uid" 13:04:12 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 13:04:12 policy-pap | isolation.level = read_uncommitted 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.923145739Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.578258ms 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.927680441Z level=info msg="Executing migration" id="create team member table" 13:04:12 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 13:04:12 policy-pap | max.partition.fetch.bytes = 1048576 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.928968055Z level=info msg="Migration successfully executed" id="create team member table" duration=1.288194ms 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | max.poll.interval.ms = 300000 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.933339514Z level=info msg="Executing migration" id="add index 
team_member.org_id" 13:04:12 policy-db-migrator | 13:04:12 policy-pap | max.poll.records = 500 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.934699789Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.359614ms 13:04:12 policy-db-migrator | 13:04:12 policy-pap | metadata.max.age.ms = 300000 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.938760112Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 13:04:12 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 13:04:12 policy-pap | metric.reporters = [] 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.939673809Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=912.847µs 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | metrics.num.samples = 2 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.943831104Z level=info msg="Executing migration" id="add index team_member.team_id" 13:04:12 policy-db-migrator | CREATE INDEX 
TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 13:04:12 policy-pap | metrics.recording.level = INFO 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.94470195Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=870.716µs 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | metrics.sample.window.ms = 30000 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.94804572Z level=info msg="Executing migration" id="Add column email to team table" 13:04:12 policy-db-migrator | 13:04:12 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.955392893Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.343673ms 13:04:12 policy-db-migrator | 13:04:12 policy-pap | receive.buffer.bytes = 65536 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator 
t=2024-05-02T13:01:44.958674523Z level=info msg="Executing migration" id="Add column external to team_member table" 13:04:12 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 13:04:12 policy-pap | reconnect.backoff.max.ms = 1000 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.963170754Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.495671ms 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | reconnect.backoff.ms = 50 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.970042979Z level=info msg="Executing migration" id="Add column permission to team_member table" 13:04:12 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 policy-pap | request.timeout.ms = 30000 13:04:12 kafka | [2024-05-02 13:02:15,388] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.974562231Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.506012ms 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | retry.backoff.ms = 100 13:04:12 kafka | [2024-05-02 13:02:15,389] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.981335134Z level=info msg="Executing migration" id="create dashboard acl table" 13:04:12 policy-db-migrator | 13:04:12 policy-pap | sasl.client.callback.handler.class = null 13:04:12 kafka | [2024-05-02 13:02:15,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.98224613Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=910.676µs 13:04:12 policy-db-migrator | 13:04:12 policy-pap | sasl.jaas.config = null 13:04:12 kafka | [2024-05-02 13:02:15,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.98556213Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 13:04:12 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 13:04:12 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:04:12 kafka | [2024-05-02 13:02:15,390] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.986966256Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.403636ms 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:04:12 kafka | [2024-05-02 13:02:15,390] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.991319584Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 13:04:12 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 policy-pap | sasl.kerberos.service.name = null 13:04:12 kafka | [2024-05-02 13:02:15,391] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.992821062Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.501068ms 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.99769024Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 13:04:12 policy-db-migrator | 13:04:12 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:44.998600996Z level=info msg="Migration successfully executed" id="add unique index 
dashboard_acl_dashboard_id_team_id" duration=906.896µs 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.00268412Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 13:04:12 policy-db-migrator | 13:04:12 policy-pap | sasl.login.callback.handler.class = null 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.004136477Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.451936ms 13:04:12 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 13:04:12 policy-pap | sasl.login.class = null 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.008614887Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | sasl.login.connect.timeout.ms = null 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.009968702Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.352874ms 13:04:12 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, 
requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 policy-pap | sasl.login.read.timeout.ms = null 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.013345352Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.014286529Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=940.717µs 13:04:12 policy-db-migrator | 13:04:12 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.019099796Z level=info msg="Executing migration" id="add index dashboard_permission" 13:04:12 policy-db-migrator | 13:04:12 policy-pap | sasl.login.refresh.window.factor = 0.8 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.020144245Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.037869ms 13:04:12 policy-db-migrator | > upgrade 
0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 13:04:12 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.026994098Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:04:12 kafka | [2024-05-02 13:02:15,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.027731021Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=737.183µs 13:04:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 policy-pap | sasl.login.retry.backoff.ms = 100 13:04:12 kafka | [2024-05-02 13:02:15,393] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.066907736Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 13:04:12 policy-db-migrator | -------------- 13:04:12 policy-pap | sasl.mechanism = GSSAPI 13:04:12 kafka | [2024-05-02 13:02:15,393] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 
policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.067370555Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=464.009µs 13:04:12 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:04:12 kafka | [2024-05-02 13:02:15,393] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-db-migrator | 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.07324207Z level=info msg="Executing migration" id="create tag table" 13:04:12 policy-pap | sasl.oauthbearer.expected.audience = null 13:04:12 kafka | [2024-05-02 13:02:15,394] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:04:12 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.074472793Z level=info msg="Migration successfully executed" id="create tag table" duration=1.229423ms 13:04:12 policy-pap | sasl.oauthbearer.expected.issuer = null 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,395] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.078072447Z level=info msg="Executing migration" id="add index tag.key_value" 13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:04:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 kafka | [2024-05-02 13:02:15,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.079106856Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.033699ms 13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.085231276Z level=info msg="Executing migration" id="create login attempt table" 13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.085953129Z level=info msg="Migration successfully executed" id="create login attempt table" duration=721.803µs 13:04:12 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,396] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.090903648Z level=info msg="Executing migration" id="add index login_attempt.username" 13:04:12 policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:04:12 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator 
t=2024-05-02T13:01:45.092168581Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.264373ms 13:04:12 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.097251993Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 13:04:12 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:04:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.099466763Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=2.21914ms 13:04:12 policy-pap | security.protocol = PLAINTEXT 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.102914175Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 13:04:12 policy-pap | security.providers = null 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to 
NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.118495555Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.57897ms 13:04:12 policy-pap | send.buffer.bytes = 131072 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.121739123Z level=info msg="Executing migration" id="create login_attempt v2" 13:04:12 policy-pap | session.timeout.ms = 45000 13:04:12 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.122595809Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=855.866µs 13:04:12 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.127813903Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 13:04:12 policy-pap | socket.connection.setup.timeout.ms = 10000 13:04:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.128799601Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=985.477µs 13:04:12 policy-pap | ssl.cipher.suites = null 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.137500087Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 13:04:12 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.137818673Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=316.796µs 13:04:12 policy-pap | ssl.endpoint.identification.algorithm = https 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.140823807Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 13:04:12 policy-pap | ssl.engine.factory.class = null 13:04:12 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 
grafana | logger=migrator t=2024-05-02T13:01:45.141426268Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=602.971µs 13:04:12 policy-pap | ssl.key.password = null 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.146751364Z level=info msg="Executing migration" id="create user auth table" 13:04:12 policy-pap | ssl.keymanager.algorithm = SunX509 13:04:12 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.147593459Z level=info msg="Migration successfully executed" id="create user auth table" duration=844.355µs 13:04:12 policy-pap | ssl.keystore.certificate.chain = null 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.150739005Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 13:04:12 policy-pap | ssl.keystore.key = null 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica 
(state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.151716763Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=977.328µs 13:04:12 policy-pap | ssl.keystore.location = null 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.154773418Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 13:04:12 policy-pap | ssl.keystore.password = null 13:04:12 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.15490358Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=130.562µs 13:04:12 policy-pap | ssl.keystore.type = JKS 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.160374339Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 13:04:12 policy-pap | ssl.protocol = TLSv1.3 13:04:12 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.169079176Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.705397ms 13:04:12 policy-pap | ssl.provider = null 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.173334962Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 13:04:12 policy-pap | ssl.secure.random.implementation = null 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.17879299Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.457258ms 13:04:12 policy-pap | ssl.trustmanager.algorithm = PKIX 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.181766174Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 13:04:12 policy-pap | ssl.truststore.certificates = null 13:04:12 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator 
t=2024-05-02T13:01:45.187267963Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.500359ms 13:04:12 policy-pap | ssl.truststore.location = null 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.190028783Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 13:04:12 policy-pap | ssl.truststore.password = null 13:04:12 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.195924249Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.894586ms 13:04:12 policy-pap | ssl.truststore.type = JKS 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.201851465Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 13:04:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica 
(state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.202859684Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.008149ms 13:04:12 policy-pap | 13:04:12 policy-db-migrator | 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 policy-pap | [2024-05-02T13:02:14.742+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:04:12 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.205835447Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 13:04:12 policy-pap | [2024-05-02T13:02:14.742+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:04:12 policy-db-migrator | -------------- 13:04:12 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.213524326Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.688069ms 13:04:12 grafana | logger=migrator t=2024-05-02T13:01:45.216892556Z level=info msg="Executing migration" id="create server_lock table" 13:04:13 policy-pap | [2024-05-02T13:02:14.743+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654934742 13:04:13 kafka | [2024-05-02 13:02:15,397] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | 
logger=migrator t=2024-05-02T13:01:45.217520078Z level=info msg="Migration successfully executed" id="create server_lock table" duration=624.961µs 13:04:13 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 13:04:13 policy-pap | [2024-05-02T13:02:14.743+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.222441186Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:14.743+00:00|INFO|ServiceManager|main] Policy PAP starting topics 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.22318591Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=744.313µs 13:04:13 policy-db-migrator | 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.226189084Z level=info msg="Executing migration" id="create user auth token table" 13:04:13 policy-pap | 
[2024-05-02T13:02:14.743+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=2a10f283-dfd3-4508-92be-aa54e477288d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:04:13 policy-db-migrator | 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.2271282Z level=info msg="Migration successfully executed" id="create user auth token table" duration=939.136µs 13:04:13 policy-pap | [2024-05-02T13:02:14.743+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ad46f4cb-cb07-4411-8d0e-379eef1836ce, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:04:13 policy-db-migrator | > upgrade 0100-pdp.sql 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | 
[2024-05-02T13:02:14.744+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=852a6574-4ab7-4022-90bb-94c752ab643e, alive=false, publisher=null]]: starting 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.233508795Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 13:04:13 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 13:04:13 policy-pap | [2024-05-02T13:02:14.763+00:00|INFO|ProducerConfig|main] ProducerConfig values: 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.235724845Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.2145ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | acks = -1 13:04:13 policy-db-migrator | 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.240674084Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 13:04:13 policy-pap | auto.include.jmx.reporter = true 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.24156749Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=893.396µs 13:04:13 policy-pap | batch.size = 16384 13:04:13 kafka | 
[2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.244771568Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 13:04:13 policy-pap | bootstrap.servers = [kafka:9092] 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.245766506Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=995.878µs 13:04:13 policy-pap | buffer.memory = 33554432 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.253425944Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 13:04:13 policy-pap | client.dns.lookup = use_all_dns_ips 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | client.id = producer-1 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.260090164Z level=info msg="Migration successfully 
executed" id="Add revoked_at to the user auth token" duration=6.66081ms 13:04:13 policy-db-migrator | 13:04:13 policy-pap | compression.type = none 13:04:13 kafka | [2024-05-02 13:02:15,398] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.266199874Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | connections.max.idle.ms = 540000 13:04:13 kafka | [2024-05-02 13:02:15,398] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.267910065Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.747282ms 13:04:13 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 13:04:13 policy-pap | delivery.timeout.ms = 120000 13:04:13 kafka | [2024-05-02 13:02:15,438] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.271973058Z level=info msg="Executing migration" id="create cache_data table" 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | enable.idempotence = true 13:04:13 kafka | [2024-05-02 13:02:15,439] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.273303082Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.332654ms 13:04:13 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 13:04:13 policy-pap | interceptor.classes = [] 13:04:13 kafka | [2024-05-02 13:02:15,440] INFO 
[Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.27712749Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:04:13 kafka | [2024-05-02 13:02:15,538] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.278576547Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.445916ms 13:04:13 policy-db-migrator | 13:04:13 policy-pap | linger.ms = 0 13:04:13 kafka | [2024-05-02 13:02:15,550] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.284242919Z level=info msg="Executing migration" id="create short_url table v1" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | max.block.ms = 60000 13:04:13 kafka | [2024-05-02 13:02:15,552] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.285731825Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.494127ms 13:04:13 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 13:04:13 policy-pap | max.in.flight.requests.per.connection = 5 13:04:13 kafka | [2024-05-02 13:02:15,553] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 grafana | 
logger=migrator t=2024-05-02T13:01:45.2920851Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | max.request.size = 1048576
13:04:13 kafka | [2024-05-02 13:02:15,555] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(5RKvEIxNQGa16PmnlTf3Lw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.293824691Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.743631ms
13:04:13 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
13:04:13 policy-pap | metadata.max.age.ms = 300000
13:04:13 kafka | [2024-05-02 13:02:15,566] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.302932755Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | metadata.max.idle.ms = 300000
13:04:13 kafka | [2024-05-02 13:02:15,585] INFO [Broker id=1] Finished LeaderAndIsr request in 207ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.303284531Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=356.696µs
13:04:13 policy-db-migrator |
13:04:13 policy-pap | metric.reporters = []
13:04:13 kafka | [2024-05-02 13:02:15,590] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=5RKvEIxNQGa16PmnlTf3Lw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.307924325Z level=info msg="Executing migration" id="delete alert_definition table"
13:04:13 policy-db-migrator |
13:04:13 policy-pap | metrics.num.samples = 2
13:04:13 kafka | [2024-05-02 13:02:15,595] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.308154189Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=231.964µs
13:04:13 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
13:04:13 policy-pap | metrics.recording.level = INFO
13:04:13 kafka | [2024-05-02 13:02:15,595] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.31543669Z level=info msg="Executing migration" id="recreate alert_definition table"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | metrics.sample.window.ms = 30000
13:04:13 kafka | [2024-05-02 13:02:15,596] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.317033879Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.596289ms
13:04:13 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
13:04:13 policy-pap | partitioner.adaptive.partitioning.enable = true
13:04:13 kafka | [2024-05-02 13:02:15,596] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.322542338Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | partitioner.availability.timeout.ms = 0
13:04:13 kafka | [2024-05-02 13:02:15,596] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.323897712Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.356034ms
13:04:13 policy-db-migrator |
13:04:13 policy-pap | partitioner.class = null
13:04:13 kafka | [2024-05-02 13:02:15,596] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.327994966Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | partitioner.ignore.keys = false
13:04:13 kafka | [2024-05-02 13:02:15,596] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.328928563Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=932.987µs
13:04:13 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
13:04:13 policy-pap | receive.buffer.bytes = 32768
13:04:13 kafka | [2024-05-02 13:02:15,596] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.333025797Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | reconnect.backoff.max.ms = 1000
13:04:13 kafka | [2024-05-02 13:02:15,596] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.33322504Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=198.503µs
13:04:13 policy-db-migrator |
13:04:13 policy-pap | reconnect.backoff.ms = 50
13:04:13 kafka | [2024-05-02 13:02:15,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.337041989Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
13:04:13 policy-db-migrator |
13:04:13 policy-pap | request.timeout.ms = 30000
13:04:13 kafka | [2024-05-02 13:02:15,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.338679818Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.644689ms
13:04:13 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
13:04:13 policy-pap | retries = 2147483647
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.344534374Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
13:04:13 kafka | [2024-05-02 13:02:15,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | retry.backoff.ms = 100
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.346023361Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.489857ms
13:04:13 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
13:04:13 policy-pap | sasl.client.callback.handler.class = null
13:04:13 kafka | [2024-05-02 13:02:15,597] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
13:04:13 policy-pap | sasl.jaas.config = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.349215388Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
13:04:13 policy-db-migrator | --------------
13:04:13 kafka | [2024-05-02 13:02:15,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.351152113Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.936285ms
13:04:13 policy-db-migrator |
13:04:13 kafka | [2024-05-02 13:02:15,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.360582283Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
13:04:13 policy-db-migrator |
13:04:13 kafka | [2024-05-02 13:02:15,597] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.kerberos.service.name = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.362160921Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.577688ms
13:04:13 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
13:04:13 kafka | [2024-05-02 13:02:15,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.367692001Z level=info msg="Executing migration" id="Add column paused in alert_definition"
13:04:13 policy-db-migrator | --------------
13:04:13 kafka | [2024-05-02 13:02:15,598] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
13:04:13 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.375032973Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.341472ms
13:04:13 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
13:04:13 kafka | [2024-05-02 13:02:15,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.callback.handler.class = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.378301232Z level=info msg="Executing migration" id="drop alert_definition table"
13:04:13 policy-db-migrator | --------------
13:04:13 kafka | [2024-05-02 13:02:15,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.class = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.379350971Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.049078ms
13:04:13 policy-db-migrator |
13:04:13 kafka | [2024-05-02 13:02:15,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.connect.timeout.ms = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.385584413Z level=info msg="Executing migration" id="delete alert_definition_version table"
13:04:13 policy-db-migrator |
13:04:13 kafka | [2024-05-02 13:02:15,598] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
13:04:13 policy-pap | sasl.login.read.timeout.ms = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.385675074Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=91.121µs
13:04:13 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
13:04:13 kafka | [2024-05-02 13:02:15,598] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.refresh.buffer.seconds = 300
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.388396433Z level=info msg="Executing migration" id="recreate alert_definition_version table"
13:04:13 policy-db-migrator | --------------
13:04:13 kafka | [2024-05-02 13:02:15,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.refresh.min.period.seconds = 60
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.389839539Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.444626ms
13:04:13 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
13:04:13 kafka | [2024-05-02 13:02:15,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.refresh.window.factor = 0.8
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.392828923Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
13:04:13 policy-db-migrator | JOIN pdpstatistics b
13:04:13 kafka | [2024-05-02 13:02:15,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.refresh.window.jitter = 0.05
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.394482633Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.65336ms
13:04:13 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
13:04:13 kafka | [2024-05-02 13:02:15,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-pap | sasl.login.retry.backoff.max.ms = 10000
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.402160941Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
13:04:13 policy-db-migrator | SET a.id = b.id
13:04:13 kafka | [2024-05-02 13:02:15,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.40319022Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.028929ms
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | sasl.login.retry.backoff.ms = 100
13:04:13 kafka | [2024-05-02 13:02:15,599] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.407191792Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
13:04:13 policy-db-migrator |
13:04:13 policy-pap | sasl.mechanism = GSSAPI
13:04:13 kafka | [2024-05-02 13:02:15,600] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.407297014Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=106.212µs
13:04:13 policy-db-migrator |
13:04:13 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
13:04:13 kafka | [2024-05-02 13:02:15,600] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.410334578Z level=info msg="Executing migration" id="drop alert_definition_version table"
13:04:13 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
13:04:13 policy-pap | sasl.oauthbearer.expected.audience = null
13:04:13 kafka | [2024-05-02 13:02:15,600] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.411814145Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.479097ms
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | sasl.oauthbearer.expected.issuer = null
13:04:13 kafka | [2024-05-02 13:02:15,600] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.417161111Z level=info msg="Executing migration" id="create alert_instance table"
13:04:13 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:04:13 kafka | [2024-05-02 13:02:15,600] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.418129579Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=968.348µs
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:04:13 kafka | [2024-05-02 13:02:15,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.422410556Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
13:04:13 policy-db-migrator |
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:04:13 kafka | [2024-05-02 13:02:15,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.423511375Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.100159ms
13:04:13 policy-db-migrator |
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.427146051Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
13:04:13 kafka | [2024-05-02 13:02:15,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
13:04:13 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.428811651Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.6652ms
13:04:13 kafka | [2024-05-02 13:02:15,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.434408742Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,601] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
13:04:13 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.44379239Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=9.383978ms
13:04:13 kafka | [2024-05-02 13:02:15,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | security.protocol = PLAINTEXT
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.483633898Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | security.providers = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.485330448Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.69678ms
13:04:13 kafka | [2024-05-02 13:02:15,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | send.buffer.bytes = 131072
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.488458134Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,602] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
13:04:13 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.489804599Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.345525ms
13:04:13 kafka | [2024-05-02 13:02:15,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | socket.connection.setup.timeout.ms = 10000
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.494116426Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
13:04:13 policy-pap | ssl.cipher.suites = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.520469381Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.349755ms
13:04:13 kafka | [2024-05-02 13:02:15,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.526902706Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.endpoint.identification.algorithm = https
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.554003014Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=27.099478ms
13:04:13 kafka | [2024-05-02 13:02:15,603] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.engine.factory.class = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.560774616Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | > upgrade 0210-sequence.sql
13:04:13 policy-pap | ssl.key.password = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.561794645Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.018968ms
13:04:13 kafka | [2024-05-02 13:02:15,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.keymanager.algorithm = SunX509
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.570351199Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
13:04:13 policy-pap | ssl.keystore.certificate.chain = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.571940217Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.588728ms
13:04:13 kafka | [2024-05-02 13:02:15,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.keystore.key = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.576910597Z level=info msg="Executing migration" id="add current_reason column related to current_state"
13:04:13 kafka | [2024-05-02 13:02:15,604] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.keystore.location = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.582485977Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.57438ms
13:04:13 kafka | [2024-05-02 13:02:15,605] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.keystore.password = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.587216422Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
13:04:13 kafka | [2024-05-02 13:02:15,605] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
13:04:13 policy-db-migrator | > upgrade 0220-sequence.sql
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.592825183Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.611421ms
13:04:13 kafka | [2024-05-02 13:02:15,605] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.keystore.type = JKS
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.595830497Z level=info msg="Executing migration" id="create alert_rule table"
13:04:13 kafka | [2024-05-02 13:02:15,605] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
13:04:13 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
13:04:13 policy-pap | ssl.protocol = TLSv1.3
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.596818255Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=986.918µs
13:04:13 kafka | [2024-05-02 13:02:15,605] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.provider = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.601701573Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
13:04:13 kafka | [2024-05-02 13:02:15,606] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.secure.random.implementation = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.60265988Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=957.057µs
13:04:13 kafka | [2024-05-02 13:02:15,606] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.trustmanager.algorithm = PKIX
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.607932125Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
13:04:13 kafka | [2024-05-02 13:02:15,606] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 13:04:13 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 13:04:13 policy-pap | ssl.truststore.certificates = null 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.608898302Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=966.437µs 13:04:13 kafka | [2024-05-02 13:02:15,606] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | ssl.truststore.location = null 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.612169101Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 13:04:13 kafka | [2024-05-02 13:02:15,607] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 13:04:13 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT 
PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 13:04:13 policy-pap | ssl.truststore.password = null 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.613942783Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.769192ms 13:04:13 kafka | [2024-05-02 13:02:15,607] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | ssl.truststore.type = JKS 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.617242093Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 13:04:13 kafka | [2024-05-02 13:02:15,607] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 policy-pap | transaction.timeout.ms = 60000 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.617305884Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=64.661µs 13:04:13 kafka | [2024-05-02 13:02:15,607] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 policy-pap | transactional.id = null 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.622013959Z level=info msg="Executing migration" id="add column for to alert_rule" 13:04:13 kafka | [2024-05-02 13:02:15,607] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 13:04:13 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.627931775Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.916807ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:04:13 kafka | [2024-05-02 13:02:15,607] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.63098357Z level=info msg="Executing migration" id="add column annotations to alert_rule" 13:04:13 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY 
KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 13:04:13 policy-pap | 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.636832265Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.845785ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:14.774+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.642796983Z level=info msg="Executing migration" id="add column labels to alert_rule" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | [2024-05-02T13:02:14.789+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 13:04:13 
grafana | logger=migrator t=2024-05-02T13:01:45.651872336Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=9.081104ms 13:04:13 policy-db-migrator | 13:04:13 policy-pap | [2024-05-02T13:02:14.789+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.657322884Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 13:04:13 policy-db-migrator | > upgrade 0120-toscatrigger.sql 13:04:13 policy-pap | [2024-05-02T13:02:14.789+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654934789 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.658707539Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.384545ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:14.789+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=852a6574-4ab7-4022-90bb-94c752ab643e, alive=false, 
publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.66266227Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 13:04:13 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 13:04:13 policy-pap | [2024-05-02T13:02:14.789+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=20050383-eb61-4137-8107-8e374c8ef610, alive=false, publisher=null]]: starting 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.664780498Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=2.117648ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:14.790+00:00|INFO|ProducerConfig|main] ProducerConfig values: 13:04:13 kafka | [2024-05-02 13:02:15,608] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.668657608Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | acks = -1 13:04:13 kafka | [2024-05-02 13:02:15,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.678053287Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.391169ms 13:04:13 policy-db-migrator | 13:04:13 policy-pap | auto.include.jmx.reporter = true 13:04:13 kafka | [2024-05-02 13:02:15,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.688496585Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 13:04:13 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 13:04:13 policy-pap | batch.size = 16384 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.699168987Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=10.670182ms 13:04:13 kafka | [2024-05-02 
13:02:15,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | bootstrap.servers = [kafka:9092] 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.704165957Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 13:04:13 kafka | [2024-05-02 13:02:15,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 13:04:13 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 13:04:13 policy-pap | buffer.memory = 33554432 13:04:13 kafka | [2024-05-02 13:02:15,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.705026453Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=860.266µs 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | client.dns.lookup = use_all_dns_ips 
13:04:13 kafka | [2024-05-02 13:02:15,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.707984936Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | client.id = producer-2 13:04:13 kafka | [2024-05-02 13:02:15,609] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.714173987Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.187721ms 13:04:13 policy-db-migrator | 13:04:13 policy-pap | compression.type = none 13:04:13 kafka | [2024-05-02 13:02:15,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.72096026Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 13:04:13 policy-db-migrator | > upgrade 0140-toscaparameter.sql 13:04:13 policy-pap | 
connections.max.idle.ms = 540000 13:04:13 kafka | [2024-05-02 13:02:15,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.727439436Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.478316ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | delivery.timeout.ms = 120000 13:04:13 kafka | [2024-05-02 13:02:15,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.731787804Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 13:04:13 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 13:04:13 policy-pap | enable.idempotence = true 13:04:13 kafka | [2024-05-02 13:02:15,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.732003368Z level=info msg="Migration successfully executed" id="fix is_paused column 
for alert_rule table" duration=215.554µs 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | interceptor.classes = [] 13:04:13 kafka | [2024-05-02 13:02:15,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.735275427Z level=info msg="Executing migration" id="create alert_rule_version table" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:04:13 kafka | [2024-05-02 13:02:15,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.736922167Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.6459ms 13:04:13 policy-db-migrator | 13:04:13 policy-pap | linger.ms = 0 13:04:13 kafka | [2024-05-02 13:02:15,610] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.741395267Z level=info 
msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 13:04:13 policy-db-migrator | > upgrade 0150-toscaproperty.sql 13:04:13 policy-pap | max.block.ms = 60000 13:04:13 kafka | [2024-05-02 13:02:15,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.742923495Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.527698ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | max.in.flight.requests.per.connection = 5 13:04:13 kafka | [2024-05-02 13:02:15,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.74654083Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 13:04:13 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 13:04:13 policy-pap | max.request.size = 1048576 13:04:13 kafka | [2024-05-02 13:02:15,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.747956326Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.414645ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | metadata.max.age.ms = 300000 13:04:13 kafka | [2024-05-02 13:02:15,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.751082422Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | metadata.max.idle.ms = 300000 13:04:13 kafka | [2024-05-02 13:02:15,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.751291306Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=209.684µs 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | metric.reporters = [] 13:04:13 kafka | [2024-05-02 13:02:15,611] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.755343348Z level=info msg="Executing migration" id="add column for to alert_rule_version" 13:04:13 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 13:04:13 policy-pap | metrics.num.samples = 2 13:04:13 kafka | [2024-05-02 13:02:15,611] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.761907627Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.563609ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | metrics.recording.level = INFO 13:04:13 kafka | [2024-05-02 13:02:15,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.766357657Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | metrics.sample.window.ms = 30000 13:04:13 kafka | 
[2024-05-02 13:02:15,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.771106852Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.748685ms
13:04:13 policy-pap | partitioner.adaptive.partitioning.enable = true
13:04:13 kafka | [2024-05-02 13:02:15,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.774431382Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
13:04:13 policy-pap | partitioner.availability.timeout.ms = 0
13:04:13 kafka | [2024-05-02 13:02:15,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
13:04:13 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.781899777Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.467364ms
13:04:13 policy-pap | partitioner.class = null
13:04:13 kafka | [2024-05-02 13:02:15,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.785388969Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
13:04:13 policy-pap | partitioner.ignore.keys = false
13:04:13 kafka | [2024-05-02 13:02:15,612] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.791954417Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.564598ms
13:04:13 policy-pap | receive.buffer.bytes = 32768
13:04:13 kafka | [2024-05-02 13:02:15,612] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.796461019Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
13:04:13 policy-pap | reconnect.backoff.max.ms = 1000
13:04:13 kafka | [2024-05-02 13:02:15,613] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
13:04:13 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.802713491Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.249192ms
13:04:13 policy-pap | reconnect.backoff.ms = 50
13:04:13 kafka | [2024-05-02 13:02:15,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.808112518Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
13:04:13 policy-pap | request.timeout.ms = 30000
13:04:13 kafka | [2024-05-02 13:02:15,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.808304682Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=192.594µs
13:04:13 policy-pap | retries = 2147483647
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.810875928Z level=info msg="Executing migration" id=create_alert_configuration_table
13:04:13 policy-pap | retry.backoff.ms = 100
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.811927457Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.051279ms
13:04:13 policy-pap | sasl.client.callback.handler.class = null
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.816307016Z level=info msg="Executing migration" id="Add column default in alert_configuration"
13:04:13 policy-pap | sasl.jaas.config = null
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.823127539Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.819893ms
13:04:13 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.827695661Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
13:04:13 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.827895654Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=217.984µs
13:04:13 policy-pap | sasl.kerberos.service.name = null
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.831248805Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
13:04:13 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
13:04:13 kafka | [2024-05-02 13:02:15,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.838082508Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.830143ms
13:04:13 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.84430961Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
13:04:13 policy-pap | sasl.login.callback.handler.class = null
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.84541655Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.10664ms
13:04:13 policy-pap | sasl.login.class = null
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.854417882Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
13:04:13 policy-pap | sasl.login.connect.timeout.ms = null
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.864701867Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=10.284935ms
13:04:13 policy-pap | sasl.login.read.timeout.ms = null
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.898521246Z level=info msg="Executing migration" id=create_ngalert_configuration_table
13:04:13 policy-pap | sasl.login.refresh.buffer.seconds = 300
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.900044503Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.522657ms
13:04:13 policy-pap | sasl.login.refresh.min.period.seconds = 60
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.904636786Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
13:04:13 policy-pap | sasl.login.refresh.window.factor = 0.8
13:04:13 kafka | [2024-05-02 13:02:15,616] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.905746856Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.11019ms
13:04:13 policy-pap | sasl.login.refresh.window.jitter = 0.05
13:04:13 kafka | [2024-05-02 13:02:15,619] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
13:04:13 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.909234299Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
13:04:13 policy-pap | sasl.login.retry.backoff.max.ms = 10000
13:04:13 kafka | [2024-05-02 13:02:15,620] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.916021451Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.786382ms
13:04:13 policy-pap | sasl.login.retry.backoff.ms = 100
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.924241289Z level=info msg="Executing migration" id="create provenance_type table"
13:04:13 policy-pap | sasl.mechanism = GSSAPI
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.925295398Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.050799ms
13:04:13 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.931015101Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
13:04:13 policy-pap | sasl.oauthbearer.expected.audience = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | sasl.oauthbearer.expected.issuer = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.932815263Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.799982ms
13:04:13 policy-db-migrator | > upgrade 0100-upgrade.sql
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.936981828Z level=info msg="Executing migration" id="create alert_image table"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.939711077Z level=info msg="Migration successfully executed" id="create alert_image table" duration=2.735229ms
13:04:13 policy-db-migrator | select 'upgrade to 1100 completed' as msg
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.944273069Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.94542479Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.151801ms
13:04:13 policy-db-migrator |
13:04:13 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.948904313Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
13:04:13 policy-db-migrator | msg
13:04:13 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.949062625Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=157.092µs
13:04:13 policy-db-migrator | upgrade to 1100 completed
13:04:13 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.953471275Z level=info msg="Executing migration" id=create_alert_configuration_history_table
13:04:13 policy-db-migrator |
13:04:13 policy-pap | security.protocol = PLAINTEXT
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.954563314Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.091449ms
13:04:13 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
13:04:13 policy-pap | security.providers = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.961446518Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | send.buffer.bytes = 131072
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.963133999Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.687511ms
13:04:13 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
13:04:13 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.96877291Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | socket.connection.setup.timeout.ms = 10000
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.96928691Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.cipher.suites = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.972499767Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.973063948Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=563.5µs
13:04:13 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
13:04:13 policy-pap | ssl.endpoint.identification.algorithm = https
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.976621192Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.engine.factory.class = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.977796503Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.175211ms
13:04:13 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
13:04:13 policy-pap | ssl.key.password = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.983485955Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.keymanager.algorithm = SunX509
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.991633302Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.148507ms
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.keystore.certificate.chain = null
13:04:13 kafka | [2024-05-02 13:02:15,621] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.997298784Z level=info msg="Executing migration" id="create library_element table v1"
13:04:13 policy-pap | ssl.keystore.key = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:45.998585367Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.286493ms
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.keystore.location = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.003960812Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
13:04:13 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
13:04:13 policy-pap | ssl.keystore.password = null
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.005249442Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.287821ms
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.keystore.type = JKS
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.012975291Z level=info msg="Executing migration" id="create library_element_connection table v1"
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.protocol = TLSv1.3
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.provider = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.014390654Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.414933ms
13:04:13 policy-db-migrator | > upgrade 0120-audit_sequence.sql
13:04:13 policy-pap | ssl.secure.random.implementation = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.017691329Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.trustmanager.algorithm = PKIX
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.019650321Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.958492ms
13:04:13 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
13:04:13 policy-pap | ssl.truststore.certificates = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.022752252Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.truststore.location = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.024145766Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.392373ms
13:04:13 policy-db-migrator |
13:04:13 policy-pap | ssl.truststore.password = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.027465541Z level=info msg="Executing migration" id="increase max description length to 2048"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | ssl.truststore.type = JKS
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.027575402Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=109.072µs
13:04:13 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
13:04:13 policy-pap | transaction.timeout.ms = 60000
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.032142518Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
13:04:13 policy-db-migrator | --------------
13:04:13 policy-pap | transactional.id = null
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator |
13:04:13 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.032333061Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=188.313µs
13:04:13 policy-db-migrator |
13:04:13 policy-pap |
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.034756951Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
13:04:13 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
13:04:13 policy-pap | [2024-05-02T13:02:14.805+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.03529435Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=542.179µs 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:14.808+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.038651596Z level=info msg="Executing migration" id="create data_keys table" 13:04:13 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 13:04:13 policy-pap | [2024-05-02T13:02:14.808+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.040469246Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.8177ms 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | 
[2024-05-02T13:02:14.808+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714654934808 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.047793167Z level=info msg="Executing migration" id="create secrets table" 13:04:13 policy-db-migrator | 13:04:13 policy-pap | [2024-05-02T13:02:14.808+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=20050383-eb61-4137-8107-8e374c8ef610, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 13:04:13 policy-pap | [2024-05-02T13:02:14.809+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.048853415Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.059238ms 13:04:13 policy-pap | [2024-05-02T13:02:14.809+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.055381313Z level=info msg="Executing migration" id="rename data_keys name column to id" 13:04:13 policy-pap | [2024-05-02T13:02:14.811+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 13:04:13 policy-db-migrator | -------------- 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.098611089Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=43.220926ms 13:04:13 policy-pap | [2024-05-02T13:02:14.812+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 13:04:13 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.102385232Z level=info msg="Executing migration" id="add name column into 
data_keys" 13:04:13 policy-pap | [2024-05-02T13:02:14.815+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 13:04:13 policy-db-migrator | -------------- 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.11129875Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.911708ms 13:04:13 policy-pap | [2024-05-02T13:02:14.818+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 13:04:13 policy-db-migrator | 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.115161344Z level=info msg="Executing migration" id="copy data_keys id column values into name" 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 policy-pap | [2024-05-02T13:02:14.819+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-db-migrator | TRUNCATE TABLE sequence 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 policy-pap | [2024-05-02T13:02:14.823+00:00|INFO|TimerManager|Thread-9] timer manager update started 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-db-migrator | 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 policy-pap | [2024-05-02T13:02:14.823+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 13:04:13 policy-db-migrator | 13:04:13 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 13:04:13 policy-pap | [2024-05-02T13:02:14.824+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.115381257Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=224.343µs 13:04:13 kafka | [2024-05-02 13:02:15,622] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:14.825+00:00|INFO|ServiceManager|main] Policy PAP started 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.11975223Z level=info msg="Executing migration" id="rename data_keys name column to label" 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 13:04:13 policy-pap | [2024-05-02T13:02:14.827+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.131 seconds (process running for 10.772) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.151848082Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=32.090571ms 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:15.236+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.155292909Z level=info msg="Executing migration" id="rename data_keys id column back to name" 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 policy-pap | [2024-05-02T13:02:15.236+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 241kjIVNQKeIb2Rrsc8nPA 13:04:13 grafana | 
logger=migrator t=2024-05-02T13:01:46.186059028Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.762219ms 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:15.237+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 241kjIVNQKeIb2Rrsc8nPA 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.190204697Z level=info msg="Executing migration" id="create kv_store table v1" 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | DROP TABLE pdpstatistics 13:04:13 policy-pap | [2024-05-02T13:02:15.236+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 241kjIVNQKeIb2Rrsc8nPA 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.191102472Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=898.275µs 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:15.292+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Error while fetching 
metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.1964334Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 policy-pap | [2024-05-02T13:02:15.292+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Cluster ID: 241kjIVNQKeIb2Rrsc8nPA 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.197881564Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.450604ms 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 policy-pap | [2024-05-02T13:02:15.347+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.202987099Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 13:04:13 policy-pap | [2024-05-02T13:02:15.352+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 13:04:13 grafana | 
logger=migrator t=2024-05-02T13:01:46.20363017Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=647µs 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:15.356+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.209937424Z level=info msg="Executing migration" id="create permission table" 13:04:13 kafka | [2024-05-02 13:02:15,623] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 13:04:13 policy-pap | [2024-05-02T13:02:15.408+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.212008978Z level=info msg="Migration successfully executed" id="create permission table" duration=2.075204ms 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:15.466+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:04:13 grafana | logger=migrator 
t=2024-05-02T13:01:46.217656122Z level=info msg="Executing migration" id="add unique index permission.role_id" 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 policy-pap | [2024-05-02T13:02:15.569+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.21873251Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.076288ms 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.222233628Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 13:04:13 policy-pap | [2024-05-02T13:02:15.570+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.224223971Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.989353ms 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-pap | [2024-05-02T13:02:16.312+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.229094761Z level=info msg="Executing migration" id="create role table" 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | DROP TABLE statistics_sequence 13:04:13 policy-pap | [2024-05-02T13:02:16.319+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.230654567Z level=info msg="Migration successfully executed" id="create role table" duration=1.557706ms 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | -------------- 13:04:13 policy-db-migrator | 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.235372586Z level=info msg="Executing migration" id="add column display_name" 13:04:13 kafka | [2024-05-02 13:02:15,624] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | policyadmin: OK: upgrade (1300) 13:04:13 policy-db-migrator | name version 13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 
13:04:13 policy-db-migrator | policyadmin 1300 13:04:13 policy-db-migrator | ID script operation from_version to_version tag success atTime 13:04:13 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-pap | [2024-05-02T13:02:16.322+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-pap | [2024-05-02T13:02:16.325+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] (Re-)joining group 13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-pap | [2024-05-02T13:02:16.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Request joining group due to: need to re-join with the given member-id: 
consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3-bba39724-613b-4f99-a8cc-60347d325b0c 13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-pap | [2024-05-02T13:02:16.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-pap | [2024-05-02T13:02:16.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] (Re-)joining group 13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 13:04:13 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44 13:04:13 policy-db-migrator 
| 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44
13:04:13 policy-pap | [2024-05-02T13:02:16.358+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-99bb646c-54eb-4071-b10f-cc08b6bdb05d
13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:44
13:04:13 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-pap | [2024-05-02T13:02:16.359+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
13:04:13 kafka | [2024-05-02 13:02:15,625] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-pap | [2024-05-02T13:02:16.359+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
13:04:13 kafka | [2024-05-02 13:02:15,627] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
13:04:13 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-pap | [2024-05-02T13:02:19.385+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-99bb646c-54eb-4071-b10f-cc08b6bdb05d', protocol='range'}
13:04:13 kafka | [2024-05-02 13:02:15,627] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:04:13 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-pap | [2024-05-02T13:02:19.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-99bb646c-54eb-4071-b10f-cc08b6bdb05d=Assignment(partitions=[policy-pdp-pap-0])}
13:04:13 kafka | [2024-05-02 13:02:15,641] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
13:04:13 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-pap | [2024-05-02T13:02:19.393+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Successfully joined group with generation Generation{generationId=1, memberId='consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3-bba39724-613b-4f99-a8cc-60347d325b0c', protocol='range'}
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
13:04:13 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-pap | [2024-05-02T13:02:19.393+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Finished assignment for group at generation 1: {consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3-bba39724-613b-4f99-a8cc-60347d325b0c=Assignment(partitions=[policy-pdp-pap-0])}
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
13:04:13 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-pap | [2024-05-02T13:02:19.424+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Successfully synced group in generation Generation{generationId=1, memberId='consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3-bba39724-613b-4f99-a8cc-60347d325b0c', protocol='range'}
13:04:13 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.424+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-99bb646c-54eb-4071-b10f-cc08b6bdb05d', protocol='range'}
13:04:13 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.425+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
13:04:13 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.425+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
13:04:13 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:45
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.431+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Adding newly assigned partitions: policy-pdp-pap-0
13:04:13 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.431+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
13:04:13 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.459+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
13:04:13 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.460+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Found no committed offset for partition policy-pdp-pap-0
13:04:13 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.481+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
13:04:13 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:19.492+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3, groupId=ad46f4cb-cb07-4411-8d0e-379eef1836ce] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
13:04:13 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:20.942+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.243334917Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.962142ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.250700829Z level=info msg="Executing migration" id="add column group_name"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:20.943+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.262448794Z level=info msg="Migration successfully executed" id="add column group_name" duration=11.744385ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.266318618Z level=info msg="Executing migration" id="add index role.org_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:20.946+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.267139742Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=820.854µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.271439633Z level=info msg="Executing migration" id="add unique index role_org_id_name"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.692+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.272275877Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=836.074µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.275953518Z level=info msg="Executing migration" id="add index role_org_id_uid"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
13:04:13 policy-pap | []
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.277953891Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.999213ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.325458308Z level=info msg="Executing migration" id="create team role table"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.693+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.327410661Z level=info msg="Migration successfully executed" id="create team role table" duration=1.957492ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.333859657Z level=info msg="Executing migration" id="add index team_role.org_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
13:04:13 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9e2c55c4-d511-4033-a47b-7bb40f039690","timestampMs":1714654956647,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.33523016Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.366383ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.339523661Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.693+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.340835223Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.311022ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.344591775Z level=info msg="Executing migration" id="add index team_role.team_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.345763555Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.17129ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.350149797Z level=info msg="Executing migration" id="create user role table"
13:04:13 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9e2c55c4-d511-4033-a47b-7bb40f039690","timestampMs":1714654956647,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.351207995Z level=info msg="Migration successfully executed" id="create user role table" duration=1.053648ms
13:04:13 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-pap | [2024-05-02T13:02:36.702+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.354610001Z level=info msg="Executing migration" id="add index user_role.org_id"
13:04:13 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-pap | [2024-05-02T13:02:36.780+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
13:04:13 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 policy-pap | [2024-05-02T13:02:36.780+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting listener
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.355842202Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.231521ms
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
13:04:13 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.359265008Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.780+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting timer
13:04:13 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.360388447Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.123199ms
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.781+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0, expireMs=1714654986781]
13:04:13 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.367978913Z level=info msg="Executing migration" id="add index user_role.user_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.783+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0, expireMs=1714654986781]
13:04:13 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.369074261Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.094438ms
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.783+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting enqueue
13:04:13 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.376350641Z level=info msg="Executing migration" id="create builtin role table"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.783+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate started
13:04:13 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.377271747Z level=info msg="Migration successfully executed" id="create builtin role table" duration=967.746µs
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.788+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:04:13 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:46
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.380637813Z level=info msg="Executing migration" id="add index builtin_role.role_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.381770731Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.132659ms
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.845+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:04:13 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.386028462Z level=info msg="Executing migration" id="add index builtin_role.name"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.387175161Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.146469ms
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.846+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
13:04:13 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.390350363Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.855+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:13 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.399553646Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.202253ms
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.402923822Z level=info msg="Executing migration" id="add index builtin_role.org_id"
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.855+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
13:04:13 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.404055641Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.131189ms
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
13:04:13 policy-pap | [2024-05-02T13:02:36.879+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:13 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.407501258Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
13:04:13 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5d086639-f41e-480d-9e01-c44c133be1a9","timestampMs":1714654956857,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
13:04:13 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.408678927Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.178019ms
13:04:13 policy-pap | [2024-05-02T13:02:36.880+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
13:04:13 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.417225459Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
13:04:13 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5d086639-f41e-480d-9e01-c44c133be1a9","timestampMs":1714654956857,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup"}
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
13:04:13 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.418372328Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.142979ms
13:04:13 policy-pap | [2024-05-02T13:02:36.880+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
13:04:13 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.423792418Z level=info msg="Executing migration" id="add unique index role.uid"
13:04:13 policy-pap | [2024-05-02T13:02:36.889+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
13:04:13 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.424895176Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.106769ms
13:04:13 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b5e972a4-d4dc-44d3-b1ee-eeb5e46402bb","timestampMs":1714654956858,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
13:04:13 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.429045625Z level=info msg="Executing migration" id="create seed assignment table"
13:04:13 policy-pap | [2024-05-02T13:02:36.904+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
13:04:13 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.429876508Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=830.363µs
13:04:13 policy-pap | [2024-05-02T13:02:36.905+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping enqueue
13:04:13 kafka | [2024-05-02 13:02:15,642] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
13:04:13 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.433360136Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
13:04:13 policy-pap | [2024-05-02T13:02:36.905+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping timer
13:04:13 kafka | [2024-05-02 13:02:15,643] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
13:04:13 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.434490495Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.130169ms
13:04:13 policy-pap | [2024-05-02T13:02:36.905+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0, expireMs=1714654986781]
13:04:13 kafka | [2024-05-02 13:02:15,643] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
13:04:13 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.437545175Z level=info msg="Executing migration" id="add column hidden to role table"
13:04:13 policy-pap | [2024-05-02T13:02:36.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping listener
13:04:13 kafka | [2024-05-02 13:02:15,643] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
13:04:13 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.447085914Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.540239ms
13:04:13 policy-pap | [2024-05-02T13:02:36.906+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopped
13:04:13 kafka | [2024-05-02 13:02:15,648] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43,
__consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 13:04:13 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.451019739Z level=info msg="Executing migration" id="permission kind migration" 13:04:13 policy-pap | [2024-05-02T13:02:36.908+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 13:04:13 kafka | [2024-05-02 13:02:15,655] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 13:04:13 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:47 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.460089609Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.06881ms 13:04:13 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b5e972a4-d4dc-44d3-b1ee-eeb5e46402bb","timestampMs":1714654956858,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:13 kafka | [2024-05-02 13:02:15,666] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 
2 (kafka.log.UnifiedLog$) 13:04:13 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.464176727Z level=info msg="Executing migration" id="permission attribute migration" 13:04:13 policy-pap | [2024-05-02T13:02:36.909+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0 13:04:13 kafka | [2024-05-02 13:02:15,667] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.473191756Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=9.014729ms 13:04:13 policy-pap | [2024-05-02T13:02:36.914+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate successful 13:04:13 kafka | [2024-05-02 13:02:15,668] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 13:04:13 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.476224116Z level=info msg="Executing migration" id="permission identifier migration" 13:04:13 policy-pap | [2024-05-02T13:02:36.914+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed start publishing next request 13:04:13 kafka | [2024-05-02 13:02:15,669] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high 
watermark 0 (kafka.cluster.Partition) 13:04:13 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.485402768Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.177902ms 13:04:13 policy-pap | [2024-05-02T13:02:36.914+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange starting 13:04:13 kafka | [2024-05-02 13:02:15,669] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.491988318Z level=info msg="Executing migration" id="add permission identifier index" 13:04:13 policy-pap | [2024-05-02T13:02:36.914+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange starting listener 13:04:13 kafka | [2024-05-02 13:02:15,685] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.493221188Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.23244ms 13:04:13 policy-pap | [2024-05-02T13:02:36.915+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed 
PdpStateChange starting timer 13:04:13 kafka | [2024-05-02 13:02:15,685] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.503319435Z level=info msg="Executing migration" id="add permission action scope role_id index" 13:04:13 policy-pap | [2024-05-02T13:02:36.915+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=80421e28-7b2d-4b5e-9bea-d992b890a3dd, expireMs=1714654986915] 13:04:13 kafka | [2024-05-02 13:02:15,685] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 13:04:13 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.504584536Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.264741ms 13:04:13 policy-pap | [2024-05-02T13:02:36.915+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange starting enqueue 13:04:13 kafka | [2024-05-02 13:02:15,686] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.508706075Z level=info msg="Executing migration" id="remove permission role_id action scope index" 13:04:13 policy-pap | 
[2024-05-02T13:02:36.915+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=80421e28-7b2d-4b5e-9bea-d992b890a3dd, expireMs=1714654986915] 13:04:13 kafka | [2024-05-02 13:02:15,686] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.509878144Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.171889ms 13:04:13 policy-pap | [2024-05-02T13:02:36.916+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 13:04:13 kafka | [2024-05-02 13:02:15,705] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.514174165Z level=info msg="Executing migration" id="create query_history table v1" 13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:13 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.515224563Z level=info msg="Migration successfully executed" id="create query_history 
table v1" duration=1.050528ms 13:04:13 kafka | [2024-05-02 13:02:15,706] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 policy-pap | [2024-05-02T13:02:36.916+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange started 13:04:13 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.518779552Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 13:04:13 kafka | [2024-05-02 13:02:15,706] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 13:04:13 policy-pap | [2024-05-02T13:02:36.929+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 13:04:13 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.519953701Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.17483ms 13:04:13 kafka | [2024-05-02 13:02:15,707] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:13 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 
0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.523363407Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 13:04:13 kafka | [2024-05-02 13:02:15,707] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 policy-pap | [2024-05-02T13:02:36.930+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 13:04:13 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.5235101Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=142.113µs 13:04:13 kafka | [2024-05-02 13:02:15,719] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 policy-pap | [2024-05-02T13:02:36.944+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 13:04:13 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.526888716Z level=info msg="Executing migration" id="rbac disabled migrator" 13:04:13 kafka | [2024-05-02 13:02:15,720] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 policy-pap | 
{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"36edb522-a0ae-4fef-a0d3-0df22ea01716","timestampMs":1714654956932,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:13 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:48 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.527013518Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=124.612µs 13:04:13 kafka | [2024-05-02 13:02:15,720] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 13:04:13 policy-pap | [2024-05-02T13:02:36.946+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 80421e28-7b2d-4b5e-9bea-d992b890a3dd 13:04:13 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.954+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:04:13 kafka | [2024-05-02 13:02:15,720] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.531309739Z level=info msg="Executing migration" id="teams permissions migration" 13:04:13 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0205241301440800u 1 2024-05-02 13:01:49 13:04:13 policy-pap | 
{"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","timestampMs":1714654956761,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:13 kafka | [2024-05-02 13:02:15,721] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.531908859Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=599.12µs 13:04:13 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.955+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 13:04:13 kafka | [2024-05-02 13:02:15,729] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.535207604Z level=info msg="Executing migration" id="dashboard permissions" 13:04:13 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.960+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 13:04:13 kafka | [2024-05-02 13:02:15,735] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.535911685Z level=info msg="Migration successfully executed" 
id="dashboard permissions" duration=704.231µs 13:04:13 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"80421e28-7b2d-4b5e-9bea-d992b890a3dd","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"36edb522-a0ae-4fef-a0d3-0df22ea01716","timestampMs":1714654956932,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 13:04:13 kafka | [2024-05-02 13:02:15,735] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.539609647Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 13:04:13 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.961+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange stopping 13:04:13 kafka | [2024-05-02 13:02:15,735] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.540354449Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=744.142µs 13:04:13 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.961+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange stopping enqueue 13:04:13 kafka | [2024-05-02 
13:02:15,736] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.547001419Z level=info msg="Executing migration" id="drop managed folder create actions" 13:04:13 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.961+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange stopping timer 13:04:13 kafka | [2024-05-02 13:02:15,747] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.547332875Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=330.606µs 13:04:13 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.961+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=80421e28-7b2d-4b5e-9bea-d992b890a3dd, expireMs=1714654986915] 13:04:13 kafka | [2024-05-02 13:02:15,747] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.55185502Z level=info msg="Executing migration" id="alerting notification permissions" 13:04:13 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 
13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange stopping listener 13:04:13 kafka | [2024-05-02 13:02:15,748] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.552609762Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=754.692µs 13:04:13 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange stopped 13:04:13 kafka | [2024-05-02 13:02:15,748] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.555925257Z level=info msg="Executing migration" id="create query_history_star table v1" 13:04:13 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.962+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpStateChange successful 13:04:13 kafka | [2024-05-02 13:02:15,748] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.557485223Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.559276ms 13:04:13 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.962+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed start publishing next request 13:04:13 kafka | [2024-05-02 13:02:15,756] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.561083932Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 13:04:13 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting 13:04:13 kafka | [2024-05-02 13:02:15,758] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.562257492Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.17345ms 13:04:13 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0205241301440900u 1 2024-05-02 13:01:49 13:04:13 policy-pap | [2024-05-02T13:02:36.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting listener 13:04:13 kafka | [2024-05-02 13:02:15,759] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for 
partition __consumer_offsets-19 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.567244155Z level=info msg="Executing migration" id="add column org_id in query_history_star"
13:04:13 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:49
13:04:13 policy-pap | [2024-05-02T13:02:36.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting timer
13:04:13 kafka | [2024-05-02 13:02:15,759] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.576852134Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.587158ms
13:04:13 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:49
13:04:13 policy-pap | [2024-05-02T13:02:36.963+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=9b29a1ef-f55b-45e6-9606-30cb56a7910e, expireMs=1714654986963]
13:04:13 kafka | [2024-05-02 13:02:15,759] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.582604289Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
13:04:13 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:49
13:04:13 policy-pap | [2024-05-02T13:02:36.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate starting enqueue
13:04:13 kafka | [2024-05-02 13:02:15,767] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.582748671Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=143.962µs
13:04:13 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:49
13:04:13 policy-pap | [2024-05-02T13:02:36.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate started
13:04:13 kafka | [2024-05-02 13:02:15,768] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.589987131Z level=info msg="Executing migration" id="create correlation table v1"
13:04:13 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:50
13:04:13 policy-pap | [2024-05-02T13:02:36.963+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:04:13 kafka | [2024-05-02 13:02:15,768] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.591158681Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.1708ms
13:04:13 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:50
13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","timestampMs":1714654956943,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 kafka | [2024-05-02 13:02:15,768] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.597633798Z level=info msg="Executing migration" id="add index correlations.uid"
13:04:13 policy-pap | [2024-05-02T13:02:36.976+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:04:13 kafka | [2024-05-02 13:02:15,768] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.598745117Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.107278ms
13:04:13 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:50
13:04:13 kafka | [2024-05-02 13:02:15,776] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.602124743Z level=info msg="Executing migration" id="add index correlations.source_uid"
13:04:13 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:50
13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","timestampMs":1714654956943,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 kafka | [2024-05-02 13:02:15,777] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.603613357Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.488475ms
13:04:13 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0205241301441000u 1 2024-05-02 13:01:50
13:04:13 policy-pap | [2024-05-02T13:02:36.976+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
13:04:13 kafka | [2024-05-02 13:02:15,777] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.607873948Z level=info msg="Executing migration" id="add correlation config column"
13:04:13 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0205241301441100u 1 2024-05-02 13:01:50
13:04:13 policy-pap | [2024-05-02T13:02:36.978+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:13 kafka | [2024-05-02 13:02:15,777] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0205241301441200u 1 2024-05-02 13:01:50
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.617493717Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.618959ms
13:04:13 policy-pap | {"source":"pap-8314741a-bad7-42f4-9d4c-45e5809d9dbb","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","timestampMs":1714654956943,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 kafka | [2024-05-02 13:02:15,777] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0205241301441200u 1 2024-05-02 13:01:50
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.624642486Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
13:04:13 policy-pap | [2024-05-02T13:02:36.979+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
13:04:13 kafka | [2024-05-02 13:02:15,791] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0205241301441200u 1 2024-05-02 13:01:50
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.625740024Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.097758ms
13:04:13 policy-pap | [2024-05-02T13:02:36.985+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:04:13 kafka | [2024-05-02 13:02:15,793] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0205241301441200u 1 2024-05-02 13:01:50
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.63634485Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
13:04:13 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2196446e-aee6-4b11-8da2-3f27281f9ed9","timestampMs":1714654956975,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 kafka | [2024-05-02 13:02:15,793] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
13:04:13 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0205241301441300u 1 2024-05-02 13:01:50
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.637734613Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.389143ms
13:04:13 policy-pap | [2024-05-02T13:02:36.986+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9b29a1ef-f55b-45e6-9606-30cb56a7910e
13:04:13 kafka | [2024-05-02 13:02:15,795] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0205241301441300u 1 2024-05-02 13:01:50
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.641422594Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
13:04:13 policy-pap | [2024-05-02T13:02:36.988+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:04:13 kafka | [2024-05-02 13:02:15,795] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0205241301441300u 1 2024-05-02 13:01:50
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.665094166Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.670792ms
13:04:13 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9b29a1ef-f55b-45e6-9606-30cb56a7910e","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"2196446e-aee6-4b11-8da2-3f27281f9ed9","timestampMs":1714654956975,"name":"apex-7e7a4170-6764-4797-b24f-8933463e83ed","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:04:13 kafka | [2024-05-02 13:02:15,802] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 policy-db-migrator | policyadmin: OK @ 1300
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.672715842Z level=info msg="Executing migration" id="create correlation v2"
13:04:13 policy-pap | [2024-05-02T13:02:36.989+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping
13:04:13 kafka | [2024-05-02 13:02:15,803] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 policy-pap | [2024-05-02T13:02:36.989+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping enqueue
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.673662218Z level=info msg="Migration successfully executed" id="create correlation v2" duration=946.026µs
13:04:13 policy-pap | [2024-05-02T13:02:36.989+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping timer
13:04:13 kafka | [2024-05-02 13:02:15,803] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.677273308Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
13:04:13 policy-pap | [2024-05-02T13:02:36.989+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=9b29a1ef-f55b-45e6-9606-30cb56a7910e, expireMs=1714654986963]
13:04:13 kafka | [2024-05-02 13:02:15,804] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.679303191Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.029803ms
13:04:13 policy-pap | [2024-05-02T13:02:36.989+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopping listener
13:04:13 kafka | [2024-05-02 13:02:15,804] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.683601693Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
13:04:13 policy-pap | [2024-05-02T13:02:36.989+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate stopped
13:04:13 kafka | [2024-05-02 13:02:15,817] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.684877704Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.272501ms
13:04:13 policy-pap | [2024-05-02T13:02:36.994+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed PdpUpdate successful
13:04:13 kafka | [2024-05-02 13:02:15,818] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.689821676Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
13:04:13 policy-pap | [2024-05-02T13:02:36.994+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7e7a4170-6764-4797-b24f-8933463e83ed has no more requests
13:04:13 kafka | [2024-05-02 13:02:15,818] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.691134187Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.309391ms
13:04:13 policy-pap | [2024-05-02T13:02:41.484+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
13:04:13 kafka | [2024-05-02 13:02:15,819] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.734859922Z level=info msg="Executing migration" id="copy correlation v1 to v2"
13:04:13 policy-pap | [2024-05-02T13:02:41.549+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
13:04:13 kafka | [2024-05-02 13:02:15,820] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.735626455Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=766.143µs
13:04:13 policy-pap | [2024-05-02T13:02:41.562+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
13:04:13 kafka | [2024-05-02 13:02:15,830] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.742889305Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
13:04:13 policy-pap | [2024-05-02T13:02:41.564+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
13:04:13 kafka | [2024-05-02 13:02:15,831] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.744009114Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.119058ms
13:04:13 policy-pap | [2024-05-02T13:02:41.999+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,832] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.748114422Z level=info msg="Executing migration" id="add provisioning column"
13:04:13 policy-pap | [2024-05-02T13:02:42.567+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,832] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.757754101Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.63663ms
13:04:13 policy-pap | [2024-05-02T13:02:42.568+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,832] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.762627962Z level=info msg="Executing migration" id="create entity_events table"
13:04:13 policy-pap | [2024-05-02T13:02:43.126+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,843] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.763564998Z level=info msg="Migration successfully executed" id="create entity_events table" duration=936.686µs
13:04:13 policy-pap | [2024-05-02T13:02:43.320+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
13:04:13 kafka | [2024-05-02 13:02:15,843] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.766976234Z level=info msg="Executing migration" id="create dashboard public config v1"
13:04:13 policy-pap | [2024-05-02T13:02:43.417+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
13:04:13 kafka | [2024-05-02 13:02:15,844] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.767878439Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=901.355µs
13:04:13 policy-pap | [2024-05-02T13:02:43.417+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,844] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.77035365Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
13:04:13 policy-pap | [2024-05-02T13:02:43.418+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,844] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.771092002Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
13:04:13 policy-pap | [2024-05-02T13:02:43.431+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-05-02T13:02:43Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-05-02T13:02:43Z, user=policyadmin)]
13:04:13 kafka | [2024-05-02 13:02:15,863] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.774807754Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:04:13 policy-pap | [2024-05-02T13:02:44.141+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,870] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.775433314Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:04:13 policy-pap | [2024-05-02T13:02:44.142+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
13:04:13 kafka | [2024-05-02 13:02:15,871] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.780076631Z level=info msg="Executing migration" id="Drop old dashboard public config table"
13:04:13 policy-pap | [2024-05-02T13:02:44.143+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
13:04:13 kafka | [2024-05-02 13:02:15,871] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.781028227Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=950.916µs
13:04:13 policy-pap | [2024-05-02T13:02:44.143+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,871] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.784955102Z level=info msg="Executing migration" id="recreate dashboard public config v1"
13:04:13 policy-pap | [2024-05-02T13:02:44.143+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,883] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.786463547Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.507205ms
13:04:13 policy-pap | [2024-05-02T13:02:44.154+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-05-02T13:02:44Z, user=policyadmin)]
13:04:13 kafka | [2024-05-02 13:02:15,886] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.79208113Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
13:04:13 policy-pap | [2024-05-02T13:02:44.578+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
13:04:13 kafka | [2024-05-02 13:02:15,886] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.79450901Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.42626ms
13:04:13 policy-pap | [2024-05-02T13:02:44.578+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,886] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.800477979Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:04:13 policy-pap | [2024-05-02T13:02:44.578+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
13:04:13 kafka | [2024-05-02 13:02:15,886] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.802849288Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.337589ms
13:04:13 policy-pap | [2024-05-02T13:02:44.579+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
13:04:13 kafka | [2024-05-02 13:02:15,897] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.80776541Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
13:04:13 policy-pap | [2024-05-02T13:02:44.579+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,897] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.80899762Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.23206ms
13:04:13 policy-pap | [2024-05-02T13:02:44.579+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,897] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.815330095Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
13:04:13 policy-pap | [2024-05-02T13:02:44.590+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-05-02T13:02:44Z, user=policyadmin)]
13:04:13 kafka | [2024-05-02 13:02:15,911] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.816738619Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.410214ms
13:04:13 policy-pap | [2024-05-02T13:03:05.183+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,911] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.819617706Z level=info msg="Executing migration" id="Drop public config table"
13:04:13 policy-pap | [2024-05-02T13:03:05.185+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
13:04:13 kafka | [2024-05-02 13:02:15,919] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.820725715Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.108139ms
13:04:13 policy-pap | [2024-05-02T13:03:06.781+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f79c0c93-5a17-4ce9-a84b-7ac1595d4fe0, expireMs=1714654986781]
13:04:13 kafka | [2024-05-02 13:02:15,919] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.824201662Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
13:04:13 policy-pap | [2024-05-02T13:03:06.915+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=80421e28-7b2d-4b5e-9bea-d992b890a3dd, expireMs=1714654986915]
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.826169505Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.966003ms
13:04:13 kafka | [2024-05-02 13:02:15,920] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.830469686Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
13:04:13 kafka | [2024-05-02 13:02:15,920] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.833083709Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.623063ms
13:04:13 kafka | [2024-05-02 13:02:15,920] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.837234248Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
13:04:13 kafka | [2024-05-02 13:02:15,936] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.838203354Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=968.636µs
13:04:13 kafka | [2024-05-02 13:02:15,936] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.8409651Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
13:04:13 kafka | [2024-05-02 13:02:15,937] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:15,937] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:15,937] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.843597214Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.625454ms
13:04:13 kafka | [2024-05-02 13:02:15,997] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.851867471Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
13:04:13 kafka | [2024-05-02 13:02:15,998] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.870943447Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=19.073696ms
13:04:13 kafka | [2024-05-02 13:02:15,998] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.87357817Z level=info msg="Executing migration" id="add annotations_enabled column"
13:04:13 kafka | [2024-05-02 13:02:15,998] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.879804664Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.226254ms
13:04:13 kafka | [2024-05-02 13:02:15,998] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.884055624Z level=info msg="Executing migration" id="add time_selection_enabled column"
13:04:13 kafka | [2024-05-02 13:02:16,005] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.894482317Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=10.432293ms
13:04:13 kafka | [2024-05-02 13:02:16,006] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.903014968Z level=info msg="Executing migration" id="delete orphaned public dashboards"
13:04:13 kafka | [2024-05-02 13:02:16,006] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.903313393Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=298.105µs
13:04:13 kafka | [2024-05-02 13:02:16,006] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high
watermark 0 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.910478982Z level=info msg="Executing migration" id="add share column" 13:04:13 kafka | [2024-05-02 13:02:16,006] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.917408817Z level=info msg="Migration successfully executed" id="add share column" duration=6.932525ms 13:04:13 kafka | [2024-05-02 13:02:16,014] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.922502581Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 13:04:13 kafka | [2024-05-02 13:02:16,014] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.922719675Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=217.104µs 13:04:13 kafka | [2024-05-02 13:02:16,015] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.925657273Z level=info msg="Executing migration" id="create file table" 13:04:13 kafka | [2024-05-02 13:02:16,015] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 
(kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.92663106Z level=info msg="Migration successfully executed" id="create file table" duration=973.306µs 13:04:13 kafka | [2024-05-02 13:02:16,015] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.929730851Z level=info msg="Executing migration" id="file table idx: path natural pk" 13:04:13 kafka | [2024-05-02 13:02:16,023] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.93148947Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.757029ms 13:04:13 kafka | [2024-05-02 13:02:16,024] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.937136554Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 13:04:13 kafka | [2024-05-02 13:02:16,024] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.938938533Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.802489ms 13:04:13 kafka | [2024-05-02 13:02:16,024] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition 
__consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.942849918Z level=info msg="Executing migration" id="create file_meta table" 13:04:13 kafka | [2024-05-02 13:02:16,024] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.943713553Z level=info msg="Migration successfully executed" id="create file_meta table" duration=863.415µs 13:04:13 kafka | [2024-05-02 13:02:16,036] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.947979353Z level=info msg="Executing migration" id="file table idx: path key" 13:04:13 kafka | [2024-05-02 13:02:16,036] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.949975456Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.994443ms 13:04:13 kafka | [2024-05-02 13:02:16,036] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.955088451Z level=info msg="Executing migration" id="set path collation in file table" 13:04:13 kafka | [2024-05-02 13:02:16,036] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 
(kafka.cluster.Partition) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.955190893Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=103.432µs 13:04:13 kafka | [2024-05-02 13:02:16,036] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.961432806Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 13:04:13 kafka | [2024-05-02 13:02:16,042] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.961617099Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=190.063µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.967470336Z level=info msg="Executing migration" id="managed permissions migration" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.968197918Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=726.782µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.972341477Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.972807565Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=465.368µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.979375934Z level=info msg="Executing migration" id="RBAC action name migrator" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.980868738Z level=info msg="Migration 
successfully executed" id="RBAC action name migrator" duration=1.492905ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.989644644Z level=info msg="Executing migration" id="Add UID column to playlist" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:46.999723961Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.088398ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.006262096Z level=info msg="Executing migration" id="Update uid column values in playlist" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.006413518Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=132.032µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.0093603Z level=info msg="Executing migration" id="Add index for uid in playlist" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.01137569Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.01402ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.014685317Z level=info msg="Executing migration" id="update group index for alert rules" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.015465388Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=780.382µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.020294007Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.02051404Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=220.223µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.025449061Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.025947798Z level=info msg="Migration successfully executed" id="admin only 
folder/dashboard permission" duration=500.167µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.035119699Z level=info msg="Executing migration" id="add action column to seed_assignment" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.047830711Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.710752ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.050774903Z level=info msg="Executing migration" id="add scope column to seed_assignment" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.059984685Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.209872ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.0631606Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.064022803Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=860.063µs 13:04:13 kafka | [2024-05-02 13:02:16,043] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 kafka | [2024-05-02 13:02:16,043] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,043] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,043] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,048] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 kafka | [2024-05-02 13:02:16,049] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 kafka | [2024-05-02 13:02:16,049] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,049] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,049] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,055] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 kafka | [2024-05-02 13:02:16,056] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 kafka | [2024-05-02 13:02:16,056] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,056] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,056] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,065] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 kafka | [2024-05-02 13:02:16,066] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 kafka | [2024-05-02 13:02:16,066] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,066] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,066] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,074] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 kafka | [2024-05-02 13:02:16,075] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 kafka | [2024-05-02 13:02:16,075] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,075] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,075] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,083] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 kafka | [2024-05-02 13:02:16,083] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 kafka | [2024-05-02 13:02:16,084] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,084] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,084] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,092] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 kafka | [2024-05-02 13:02:16,092] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:04:13 kafka | [2024-05-02 13:02:16,092] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,093] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 13:04:13 kafka | [2024-05-02 13:02:16,093] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,102] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.072178329Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.149458435Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.281366ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.156324413Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.15750668Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.177997ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.161235183Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.16243039Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.194807ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.165750228Z level=info msg="Executing migration" id="add primary key to seed_assigment" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.190557733Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.807365ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.196040491Z level=info msg="Executing migration" id="add origin column to seed_assignment" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.204277099Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.235338ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.207409194Z level=info msg="Executing 
migration" id="add origin to plugin seed_assignment" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.207747029Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=337.535µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.210735981Z level=info msg="Executing migration" id="prevent seeding OnCall access" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.210965025Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=228.834µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.213862856Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.21411003Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=246.944µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.217274935Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.217707971Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=435.606µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.221391484Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.22181262Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=420.776µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.227580052Z level=info msg="Executing migration" id="create folder table" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.229068464Z level=info msg="Migration successfully executed" id="create folder table" duration=1.488942ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.232400721Z 
level=info msg="Executing migration" id="Add index for parent_uid" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.234510822Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.10441ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.237574285Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.238771393Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.197367ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.241992949Z level=info msg="Executing migration" id="Update folder title length" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.242022939Z level=info msg="Migration successfully executed" id="Update folder title length" duration=30.43µs 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.246614595Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.248585333Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.970468ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.25189294Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.252958325Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.064675ms 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.255946378Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.257117325Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.170717ms 13:04:13 grafana | 
logger=migrator t=2024-05-02T13:01:47.260733457Z level=info msg="Executing migration" id="Sync dashboard and folder table"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.261195273Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=462.896µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.265273272Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.265527875Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=254.333µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.268461687Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.269854097Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.39178ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.273211315Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.275495168Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.283683ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.28333111Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.284394345Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.063065ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.288171299Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.290042026Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.870207ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.29310302Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.294764913Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.661583ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.298333895Z level=info msg="Executing migration" id="create anon_device table"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.299307028Z level=info msg="Migration successfully executed" id="create anon_device table" duration=971.213µs
13:04:13 kafka | [2024-05-02 13:02:16,103] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,103] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,104] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,104] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,114] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,115] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,115] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,115] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,116] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,122] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,123] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,123] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,123] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,123] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,132] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,133] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,133] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,133] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,134] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,140] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,141] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,141] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,141] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,141] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,149] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,149] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,149] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,149] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,149] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,155] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,156] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,156] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,156] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,156] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,166] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,167] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,167] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,167] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,167] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,175] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,175] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,175] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,175] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,176] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,187] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,188] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,188] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,188] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,188] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,198] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,200] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,200] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,200] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,200] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,208] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,209] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,209] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,209] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,209] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,216] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,217] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,217] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,217] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,217] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,223] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,223] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,223] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,223] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,223] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.302317452Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.303718482Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.40059ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.306757995Z level=info msg="Executing migration" id="add index anon_device.updated_at"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.308241866Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.481871ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.311377951Z level=info msg="Executing migration" id="create signing_key table"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.312411896Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.033945ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.31831677Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.320450831Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=2.134481ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.324278876Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.325496023Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.217177ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.328501766Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.328878931Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=375.745µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.331158954Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.341200708Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.038934ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.344555776Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.345203295Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=647.519µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.347330235Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.348250609Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=920.314µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.353152899Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.354306015Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.153116ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.357752715Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.358921151Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.168936ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.361668081Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.362945029Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.276948ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.365789179Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.367012847Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.223088ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.369930879Z level=info msg="Executing migration" id="create sso_setting table"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.371105246Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.172646ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.376355701Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.37773241Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.377129ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.380929636Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.381424253Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=495.697µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.386636198Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.386848211Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=218.783µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.395330122Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.404575274Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.245342ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.407662938Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.420084216Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=12.421278ms
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.423664907Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.423985012Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=320.255µs
13:04:13 grafana | logger=migrator t=2024-05-02T13:01:47.426922254Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.179091941s
13:04:13 grafana | logger=sqlstore t=2024-05-02T13:01:47.435386925Z level=info msg="Created default admin" user=admin
13:04:13 grafana | logger=sqlstore t=2024-05-02T13:01:47.435651969Z level=info msg="Created default organization"
13:04:13 grafana | logger=secrets t=2024-05-02T13:01:47.44064749Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
13:04:13 grafana | logger=plugin.store t=2024-05-02T13:01:47.466578961Z level=info msg="Loading plugins..."
13:04:13 grafana | logger=local.finder t=2024-05-02T13:01:47.533555249Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
13:04:13 grafana | logger=plugin.store t=2024-05-02T13:01:47.53359455Z level=info msg="Plugins loaded" count=55 duration=67.016039ms
13:04:13 grafana | logger=query_data t=2024-05-02T13:01:47.53641978Z level=info msg="Query Service initialization"
13:04:13 grafana | logger=live.push_http t=2024-05-02T13:01:47.539638716Z level=info msg="Live Push Gateway initialization"
13:04:13 grafana | logger=ngalert.migration t=2024-05-02T13:01:47.572247943Z level=info msg=Starting
13:04:13 grafana | logger=ngalert.migration t=2024-05-02T13:01:47.572945643Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
13:04:13 grafana | logger=ngalert.migration orgID=1 t=2024-05-02T13:01:47.57344156Z level=info msg="Migrating alerts for organisation"
13:04:13 kafka | [2024-05-02 13:02:16,230] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,231] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,231] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,231] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,232] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,238] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,238] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,238] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,238] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,238] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,245] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,245] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,245] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,245] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,245] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,256] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,257] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,257] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,257] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,258] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,265] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:04:13 kafka | [2024-05-02 13:02:16,265] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:04:13 kafka | [2024-05-02 13:02:16,265] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,265] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
13:04:13 kafka | [2024-05-02 13:02:16,265] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(4WMw632vQDSuZYp6c_DPsA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,272] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,272] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,272] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,273] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 13:04:13 kafka | [2024-05-02 
13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 
(state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 13:04:13 grafana | logger=ngalert.migration orgID=1 t=2024-05-02T13:01:47.574256472Z level=info msg="Alerts found to migrate" alerts=0 13:04:13 grafana | logger=ngalert.migration t=2024-05-02T13:01:47.576353352Z level=info msg="Completed alerting migration" 13:04:13 grafana | logger=ngalert.state.manager t=2024-05-02T13:01:47.597610746Z level=info msg="Running in alternative execution of Error/NoData mode" 13:04:13 grafana | logger=infra.usagestats.collector t=2024-05-02T13:01:47.599605434Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 13:04:13 grafana | logger=provisioning.datasources t=2024-05-02T13:01:47.602378274Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 13:04:13 grafana | logger=provisioning.alerting t=2024-05-02T13:01:47.618354062Z level=info msg="starting to provision alerting" 13:04:13 grafana | logger=provisioning.alerting t=2024-05-02T13:01:47.618379373Z level=info msg="finished to provision alerting" 13:04:13 grafana | logger=ngalert.state.manager t=2024-05-02T13:01:47.618773048Z level=info msg="Warming state cache for startup" 13:04:13 grafana | logger=ngalert.multiorg.alertmanager t=2024-05-02T13:01:47.61886305Z level=info msg="Starting MultiOrg Alertmanager" 13:04:13 grafana | 
logger=ngalert.state.manager t=2024-05-02T13:01:47.619735512Z level=info msg="State cache has been initialized" states=0 duration=960.694µs 13:04:13 grafana | logger=ngalert.scheduler t=2024-05-02T13:01:47.619792133Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 13:04:13 grafana | logger=ticker t=2024-05-02T13:01:47.619917645Z level=info msg=starting first_tick=2024-05-02T13:01:50Z 13:04:13 grafana | logger=http.server t=2024-05-02T13:01:47.621094292Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 13:04:13 grafana | logger=grafanaStorageLogger t=2024-05-02T13:01:47.638938657Z level=info msg="Storage starting" 13:04:13 grafana | logger=provisioning.dashboard t=2024-05-02T13:01:47.677324926Z level=info msg="starting to provision dashboards" 13:04:13 grafana | logger=plugins.update.checker t=2024-05-02T13:01:47.721423857Z level=info msg="Update check succeeded" duration=98.930995ms 13:04:13 grafana | logger=grafana.update.checker t=2024-05-02T13:01:47.724484031Z level=info msg="Update check succeeded" duration=102.802291ms 13:04:13 grafana | logger=grafana-apiserver t=2024-05-02T13:01:47.875935467Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 13:04:13 grafana | logger=grafana-apiserver t=2024-05-02T13:01:47.876544766Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 13:04:13 grafana | logger=provisioning.dashboard t=2024-05-02T13:01:47.994331811Z level=info msg="finished to provision dashboards" 13:04:13 grafana | logger=infra.usagestats t=2024-05-02T13:03:12.634231542Z level=info msg="Usage stats are ready to report" 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr 
request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,274] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 13:04:13 kafka | [2024-05-02 13:02:16,276] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,281] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,282] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: 
Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 
13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] 
INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
13:04:13 kafka | [2024-05-02 13:02:16,285] INFO [Broker id=1] Finished LeaderAndIsr request in 666ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,287] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=4WMw632vQDSuZYp6c_DPsA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,291] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,292] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
13:04:13 kafka | [2024-05-02 13:02:16,295] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 13 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,295] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,295] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,296] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,296] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,296] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,296] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,297] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,297] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,297] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,302] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 21 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 22 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,307] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,308] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 23 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:04:13 kafka | [2024-05-02 13:02:16,352] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group ad46f4cb-cb07-4411-8d0e-379eef1836ce in Empty state.
Created a new member id consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3-bba39724-613b-4f99-a8cc-60347d325b0c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,357] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-99bb646c-54eb-4071-b10f-cc08b6bdb05d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,365] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-99bb646c-54eb-4071-b10f-cc08b6bdb05d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:16,366] INFO [GroupCoordinator 1]: Preparing to rebalance group ad46f4cb-cb07-4411-8d0e-379eef1836ce in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3-bba39724-613b-4f99-a8cc-60347d325b0c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:17,001] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c1ea3ecb-3042-4296-b7e8-b195f884ad84 in Empty state. Created a new member id consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2-159e516d-7b31-454d-a5ae-5ba81c7e6592 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:17,006] INFO [GroupCoordinator 1]: Preparing to rebalance group c1ea3ecb-3042-4296-b7e8-b195f884ad84 in state PreparingRebalance with old generation 0 (__consumer_offsets-41) (reason: Adding new member consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2-159e516d-7b31-454d-a5ae-5ba81c7e6592 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:19,377] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:19,391] INFO [GroupCoordinator 1]: Stabilized group ad46f4cb-cb07-4411-8d0e-379eef1836ce generation 1 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:19,405] INFO [GroupCoordinator 1]: Assignment received from leader consumer-ad46f4cb-cb07-4411-8d0e-379eef1836ce-3-bba39724-613b-4f99-a8cc-60347d325b0c for group ad46f4cb-cb07-4411-8d0e-379eef1836ce for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:19,405] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-99bb646c-54eb-4071-b10f-cc08b6bdb05d for group policy-pap for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:20,007] INFO [GroupCoordinator 1]: Stabilized group c1ea3ecb-3042-4296-b7e8-b195f884ad84 generation 1 (__consumer_offsets-41) with 1 members (kafka.coordinator.group.GroupCoordinator) 13:04:13 kafka | [2024-05-02 13:02:20,023] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c1ea3ecb-3042-4296-b7e8-b195f884ad84-2-159e516d-7b31-454d-a5ae-5ba81c7e6592 for group c1ea3ecb-3042-4296-b7e8-b195f884ad84 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 13:04:13 ++ echo 'Tearing down containers...' 13:04:13 Tearing down containers... 13:04:13 ++ docker-compose down -v --remove-orphans 13:04:13 Stopping policy-apex-pdp ... 13:04:13 Stopping grafana ... 13:04:13 Stopping policy-pap ... 13:04:13 Stopping kafka ... 13:04:13 Stopping policy-api ... 13:04:13 Stopping simulator ... 13:04:13 Stopping mariadb ... 13:04:13 Stopping prometheus ... 13:04:13 Stopping zookeeper ... 13:04:14 Stopping grafana ... done 13:04:14 Stopping prometheus ... done 13:04:23 Stopping policy-apex-pdp ... done 13:04:34 Stopping policy-pap ... done 13:04:34 Stopping simulator ... done 13:04:35 Stopping mariadb ... done 13:04:35 Stopping kafka ... done 13:04:36 Stopping zookeeper ... done 13:04:44 Stopping policy-api ... done 13:04:44 Removing policy-apex-pdp ... 13:04:44 Removing grafana ... 13:04:44 Removing policy-pap ... 13:04:44 Removing kafka ... 13:04:44 Removing policy-api ... 13:04:44 Removing policy-db-migrator ... 13:04:44 Removing simulator ... 13:04:44 Removing mariadb ... 13:04:44 Removing prometheus ... 13:04:44 Removing zookeeper ... 13:04:44 Removing policy-db-migrator ... done 13:04:44 Removing policy-api ... done 13:04:44 Removing simulator ... done 13:04:44 Removing prometheus ... done 13:04:44 Removing policy-apex-pdp ... done 13:04:44 Removing grafana ... done 13:04:44 Removing mariadb ... done 13:04:44 Removing policy-pap ... 
done 13:04:44 Removing kafka ... done 13:04:44 Removing zookeeper ... done 13:04:44 Removing network compose_default 13:04:45 ++ cd /w/workspace/policy-pap-master-project-csit-pap 13:04:45 + load_set 13:04:45 + _setopts=hxB 13:04:45 ++ echo braceexpand:hashall:interactive-comments:xtrace 13:04:45 ++ tr : ' ' 13:04:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 13:04:45 + set +o braceexpand 13:04:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 13:04:45 + set +o hashall 13:04:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 13:04:45 + set +o interactive-comments 13:04:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 13:04:45 + set +o xtrace 13:04:45 ++ echo hxB 13:04:45 ++ sed 's/./& /g' 13:04:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 13:04:45 + set +h 13:04:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 13:04:45 + set +x 13:04:45 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 13:04:45 + [[ -n /tmp/tmp.o3nGSSjcc7 ]] 13:04:45 + rsync -av /tmp/tmp.o3nGSSjcc7/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 13:04:45 sending incremental file list 13:04:45 ./ 13:04:45 log.html 13:04:45 output.xml 13:04:45 report.html 13:04:45 testplan.txt 13:04:45 13:04:45 sent 919,247 bytes received 95 bytes 1,838,684.00 bytes/sec 13:04:45 total size is 918,701 speedup is 1.00 13:04:45 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 13:04:45 + exit 0 13:04:45 $ ssh-agent -k 13:04:45 unset SSH_AUTH_SOCK; 13:04:45 unset SSH_AGENT_PID; 13:04:45 echo Agent pid 2106 killed; 13:04:45 [ssh-agent] Stopped. 13:04:45 Robot results publisher started... 13:04:45 INFO: Checking test criticality is deprecated and will be dropped in a future release! 13:04:45 -Parsing output xml: 13:04:45 Done! 13:04:45 WARNING! Could not find file: **/log.html 13:04:45 WARNING! 
Could not find file: **/report.html 13:04:45 -Copying log files to build dir: 13:04:45 Done! 13:04:45 -Assigning results to build: 13:04:45 Done! 13:04:45 -Checking thresholds: 13:04:45 Done! 13:04:45 Done publishing Robot results. 13:04:45 [PostBuildScript] - [INFO] Executing post build scripts. 13:04:45 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6646574979397311630.sh 13:04:45 ---> sysstat.sh 13:04:46 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10405576310601061959.sh 13:04:46 ---> package-listing.sh 13:04:46 ++ facter osfamily 13:04:46 ++ tr '[:upper:]' '[:lower:]' 13:04:46 + OS_FAMILY=debian 13:04:46 + workspace=/w/workspace/policy-pap-master-project-csit-pap 13:04:46 + START_PACKAGES=/tmp/packages_start.txt 13:04:46 + END_PACKAGES=/tmp/packages_end.txt 13:04:46 + DIFF_PACKAGES=/tmp/packages_diff.txt 13:04:46 + PACKAGES=/tmp/packages_start.txt 13:04:46 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 13:04:46 + PACKAGES=/tmp/packages_end.txt 13:04:46 + case "${OS_FAMILY}" in 13:04:46 + dpkg -l 13:04:46 + grep '^ii' 13:04:46 + '[' -f /tmp/packages_start.txt ']' 13:04:46 + '[' -f /tmp/packages_end.txt ']' 13:04:46 + diff /tmp/packages_start.txt /tmp/packages_end.txt 13:04:46 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 13:04:46 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 13:04:46 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 13:04:46 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2622797676438400426.sh 13:04:46 ---> capture-instance-metadata.sh 13:04:46 Setup pyenv: 13:04:46 system 13:04:46 3.8.13 13:04:46 3.9.13 13:04:46 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 13:04:46 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OD2Q from file:/tmp/.os_lf_venv 13:04:48 lf-activate-venv(): INFO: Installing: lftools 13:04:58 lf-activate-venv(): INFO: 
Adding /tmp/venv-OD2Q/bin to PATH 13:04:58 INFO: Running in OpenStack, capturing instance metadata 13:04:58 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4032846562928630914.sh 13:04:58 provisioning config files... 13:04:58 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config18132814837674210033tmp 13:04:58 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 13:04:58 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 13:04:58 [EnvInject] - Injecting environment variables from a build step. 13:04:58 [EnvInject] - Injecting as environment variables the properties content 13:04:58 SERVER_ID=logs 13:04:58 13:04:58 [EnvInject] - Variables injected successfully. 13:04:58 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8801347285545659732.sh 13:04:58 ---> create-netrc.sh 13:04:58 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3103617603887947253.sh 13:04:58 ---> python-tools-install.sh 13:04:58 Setup pyenv: 13:04:58 system 13:04:58 3.8.13 13:04:58 3.9.13 13:04:58 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 13:04:58 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OD2Q from file:/tmp/.os_lf_venv 13:05:00 lf-activate-venv(): INFO: Installing: lftools 13:05:09 lf-activate-venv(): INFO: Adding /tmp/venv-OD2Q/bin to PATH 13:05:09 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3098136691854492339.sh 13:05:09 ---> sudo-logs.sh 13:05:09 Archiving 'sudo' log.. 
13:05:09 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11754396598307111570.sh 13:05:09 ---> job-cost.sh 13:05:09 Setup pyenv: 13:05:09 system 13:05:09 3.8.13 13:05:09 3.9.13 13:05:09 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 13:05:09 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OD2Q from file:/tmp/.os_lf_venv 13:05:10 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 13:05:15 lf-activate-venv(): INFO: Adding /tmp/venv-OD2Q/bin to PATH 13:05:15 INFO: No Stack... 13:05:15 INFO: Retrieving Pricing Info for: v3-standard-8 13:05:16 INFO: Archiving Costs 13:05:16 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins18404084887615617803.sh 13:05:16 ---> logs-deploy.sh 13:05:16 Setup pyenv: 13:05:16 system 13:05:16 3.8.13 13:05:16 3.9.13 13:05:16 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 13:05:16 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OD2Q from file:/tmp/.os_lf_venv 13:05:17 lf-activate-venv(): INFO: Installing: lftools 13:05:26 lf-activate-venv(): INFO: Adding /tmp/venv-OD2Q/bin to PATH 13:05:26 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1674 13:05:26 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 13:05:27 Archives upload complete. 
13:05:27 INFO: archiving logs to Nexus
13:05:28 ---> uname -a:
13:05:28 Linux prd-ubuntu1804-docker-8c-8g-36937 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
13:05:28
13:05:28
13:05:28 ---> lscpu:
13:05:28 Architecture: x86_64
13:05:28 CPU op-mode(s): 32-bit, 64-bit
13:05:28 Byte Order: Little Endian
13:05:28 CPU(s): 8
13:05:28 On-line CPU(s) list: 0-7
13:05:28 Thread(s) per core: 1
13:05:28 Core(s) per socket: 1
13:05:28 Socket(s): 8
13:05:28 NUMA node(s): 1
13:05:28 Vendor ID: AuthenticAMD
13:05:28 CPU family: 23
13:05:28 Model: 49
13:05:28 Model name: AMD EPYC-Rome Processor
13:05:28 Stepping: 0
13:05:28 CPU MHz: 2800.000
13:05:28 BogoMIPS: 5600.00
13:05:28 Virtualization: AMD-V
13:05:28 Hypervisor vendor: KVM
13:05:28 Virtualization type: full
13:05:28 L1d cache: 32K
13:05:28 L1i cache: 32K
13:05:28 L2 cache: 512K
13:05:28 L3 cache: 16384K
13:05:28 NUMA node0 CPU(s): 0-7
13:05:28 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
13:05:28
13:05:28
13:05:28 ---> nproc:
13:05:28 8
13:05:28
13:05:28
13:05:28 ---> df -h:
13:05:28 Filesystem Size Used Avail Use% Mounted on
13:05:28 udev 16G 0 16G 0% /dev
13:05:28 tmpfs 3.2G 708K 3.2G 1% /run
13:05:28 /dev/vda1 155G 14G 142G 9% /
13:05:28 tmpfs 16G 0 16G 0% /dev/shm
13:05:28 tmpfs 5.0M 0 5.0M 0% /run/lock
13:05:28 tmpfs 16G 0 16G 0% /sys/fs/cgroup
13:05:28 /dev/vda15 105M 4.4M 100M 5% /boot/efi
13:05:28 tmpfs 3.2G 0 3.2G 0% /run/user/1001
13:05:28
13:05:28
13:05:28 ---> free -m:
13:05:28 total used free shared buff/cache available
13:05:28 Mem: 32167 824 25186 0 6156 30886
13:05:28 Swap: 1023 0 1023
13:05:28
13:05:28
13:05:28 ---> ip addr:
13:05:28 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
13:05:28 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
13:05:28 inet 127.0.0.1/8 scope host lo
13:05:28 valid_lft forever preferred_lft forever
13:05:28 inet6 ::1/128 scope host
13:05:28 valid_lft forever preferred_lft forever
13:05:28 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
13:05:28 link/ether fa:16:3e:af:28:22 brd ff:ff:ff:ff:ff:ff
13:05:28 inet 10.30.106.138/23 brd 10.30.107.255 scope global dynamic ens3
13:05:28 valid_lft 85933sec preferred_lft 85933sec
13:05:28 inet6 fe80::f816:3eff:feaf:2822/64 scope link
13:05:28 valid_lft forever preferred_lft forever
13:05:28 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
13:05:28 link/ether 02:42:40:36:38:3f brd ff:ff:ff:ff:ff:ff
13:05:28 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
13:05:28 valid_lft forever preferred_lft forever
13:05:28
13:05:28
13:05:28 ---> sar -b -r -n DEV:
13:05:28 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-36937) 05/02/24 _x86_64_ (8 CPU)
13:05:28
13:05:28 12:57:44 LINUX RESTART (8 CPU)
13:05:28
13:05:28 12:58:02 tps rtps wtps bread/s bwrtn/s
13:05:28 12:59:01 178.97 90.11 88.87 6462.46 61175.04
13:05:28 13:00:01 114.28 13.88 100.40 1127.84 30449.71
13:05:28 13:01:01 142.44 9.42 133.03 1667.86 61060.22
13:05:28 13:02:01 442.19 13.40 428.80 783.60 112478.99
13:05:28 13:03:01 30.14 0.37 29.78 31.33 23163.87
13:05:28 13:04:01 15.45 0.02 15.43 0.13 19294.07
13:05:28 13:05:01 67.71 1.40 66.31 111.85 21560.07
13:05:28 Average: 141.42 18.03 123.39 1431.16 46957.86
13:05:28
13:05:28 12:58:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
13:05:28 12:59:01 30165996 31680556 2773224 8.42 59152 1771560 1486560 4.37 900636 1591808 137840
13:05:28 13:00:01 29905840 31741388 3033380 9.21 84684 2048104 1384356 4.07 836024 1875300 144016
13:05:28 13:01:01 26860416 31693472 6078804 18.45 131596 4875352 1429572 4.21 984272 4612524 1673892
13:05:28 13:02:01 24578296 30457180 8360924 25.38 156160 5834588 7642768 22.49 2361976 5402904 340
13:05:28 13:03:01 23688992 29681052 9250228 28.08 157644 5942896 8738168 25.71 3185736 5457736 244
13:05:28 13:04:01 23669384 29662268 9269836 28.14 157804 5943480 8722004 25.66 3203612 5458236 460
13:05:28 13:05:01 25811672 31646904 7127548 21.64 158984 5802836 1530576 4.50 1264164 5319208 28596
13:05:28 Average: 26382942 30937546 6556278 19.90 129432 4602688 4419143 13.00 1819489 4245388 283627
13:05:28
13:05:28 12:58:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
13:05:28 12:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 12:59:01 ens3 370.61 255.64 1549.08 64.64 0.00 0.00 0.00 0.00
13:05:28 12:59:01 lo 1.93 1.93 0.20 0.20 0.00 0.00 0.00 0.00
13:05:28 13:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:00:01 ens3 48.71 37.43 690.00 7.54 0.00 0.00 0.00 0.00
13:05:28 13:00:01 lo 1.60 1.60 0.17 0.17 0.00 0.00 0.00 0.00
13:05:28 13:01:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:01:01 ens3 899.78 460.94 20844.21 34.53 0.00 0.00 0.00 0.00
13:05:28 13:01:01 br-444bf212ed34 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:01:01 lo 10.40 10.40 1.02 1.02 0.00 0.00 0.00 0.00
13:05:28 13:02:01 veth2f8d186 1.85 1.87 0.18 0.18 0.00 0.00 0.00 0.00
13:05:28 13:02:01 veth3bf4078 0.00 0.42 0.00 0.03 0.00 0.00 0.00 0.00
13:05:28 13:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:02:01 vethe383dc5 17.10 16.63 8.94 8.77 0.00 0.00 0.00 0.00
13:05:28 13:03:01 veth2f8d186 17.11 14.40 2.08 2.16 0.00 0.00 0.00 0.00
13:05:28 13:03:01 veth3bf4078 0.00 0.08 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:03:01 vethe383dc5 28.86 23.35 8.38 31.12 0.00 0.00 0.00 0.00
13:05:28 13:04:01 veth2f8d186 13.93 9.38 1.06 1.34 0.00 0.00 0.00 0.00
13:05:28 13:04:01 veth3bf4078 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:04:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:04:01 vethe383dc5 0.32 0.35 0.58 0.03 0.00 0.00 0.00 0.00
13:05:28 13:05:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 13:05:01 ens3 1691.70 970.34 33083.91 150.12 0.00 0.00 0.00 0.00
13:05:28 13:05:01 lo 34.83 34.83 6.22 6.22 0.00 0.00 0.00 0.00
13:05:28 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:05:28 Average: ens3 241.86 138.41 4747.75 21.45 0.00 0.00 0.00 0.00
13:05:28 Average: lo 4.57 4.57 0.86 0.86 0.00 0.00 0.00 0.00
13:05:28
13:05:28
13:05:28 ---> sar -P ALL:
13:05:28 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-36937) 05/02/24 _x86_64_ (8 CPU)
13:05:28
13:05:28 12:57:44 LINUX RESTART (8 CPU)
13:05:28
13:05:28 12:58:02 CPU %user %nice %system %iowait %steal %idle
13:05:28 12:59:01 all 9.96 0.00 1.32 4.38 0.04 84.30
13:05:28 12:59:01 0 3.58 0.00 1.11 0.81 0.05 94.45
13:05:28 12:59:01 1 7.99 0.00 1.42 4.04 0.03 86.52
13:05:28 12:59:01 2 8.47 0.00 0.90 0.53 0.02 90.08
13:05:28 12:59:01 3 26.01 0.00 1.66 2.50 0.07 69.75
13:05:28 12:59:01 4 17.54 0.00 1.31 0.86 0.03 80.25
13:05:28 12:59:01 5 7.18 0.00 0.74 4.47 0.03 87.58
13:05:28 12:59:01 6 4.86 0.00 0.97 1.59 0.03 92.55
13:05:28 12:59:01 7 4.01 0.00 2.46 20.25 0.03 73.24
13:05:28 13:00:01 all 10.45 0.00 0.70 2.47 0.03 86.35
13:05:28 13:00:01 0 1.52 0.00 0.55 12.28 0.07 85.58
13:05:28 13:00:01 1 2.24 0.00 0.30 0.08 0.00 97.37
13:05:28 13:00:01 2 21.46 0.00 0.97 1.13 0.03 76.40
13:05:28 13:00:01 3 17.60 0.00 1.05 0.70 0.03 80.62
13:05:28 13:00:01 4 5.67 0.00 0.62 3.35 0.02 90.35
13:05:28 13:00:01 5 27.88 0.00 1.22 1.58 0.05 69.27
13:05:28 13:00:01 6 3.33 0.00 0.48 0.27 0.02 95.90
13:05:28 13:00:01 7 3.88 0.00 0.43 0.38 0.02 95.28
13:05:28 13:01:01 all 11.50 0.00 4.44 6.29 0.27 77.50
13:05:28 13:01:01 0 11.00 0.00 4.63 9.14 0.19 75.03
13:05:28 13:01:01 1 11.47 0.00 4.03 10.16 0.08 74.26
13:05:28 13:01:01 2 13.15 0.00 4.84 13.23 0.62 68.16
13:05:28 13:01:01 3 10.95 0.00 3.79 0.74 0.35 84.16
13:05:28 13:01:01 4 11.77 0.00 5.16 5.29 0.08 77.70
13:05:28 13:01:01 5 10.17 0.00 3.59 0.14 0.07 86.03
13:05:28 13:01:01 6 8.30 0.00 4.84 0.74 0.62 85.50
13:05:28 13:01:01 7 15.19 0.00 4.68 10.92 0.07 69.15
13:05:28 13:02:01 all 17.33 0.00 4.06 7.37 0.07 71.16
13:05:28 13:02:01 0 14.26 0.00 4.08 3.39 0.07 78.20
13:05:28 13:02:01 1 15.87 0.00 3.82 18.38 0.08 61.84
13:05:28 13:02:01 2 20.14 0.00 5.06 1.21 0.08 73.50
13:05:28 13:02:01 3 15.70 0.00 3.77 1.51 0.07 78.96
13:05:28 13:02:01 4 19.38 0.00 4.21 23.42 0.08 52.90
13:05:28 13:02:01 5 18.15 0.00 4.29 5.45 0.08 72.02
13:05:28 13:02:01 6 17.01 0.00 3.73 2.21 0.07 76.99
13:05:28 13:02:01 7 18.20 0.00 3.48 3.53 0.08 74.71
13:05:28 13:03:01 all 19.67 0.00 1.78 0.98 0.07 77.50
13:05:28 13:03:01 0 17.53 0.00 1.54 0.02 0.07 80.84
13:05:28 13:03:01 1 14.85 0.00 1.34 0.07 0.05 83.69
13:05:28 13:03:01 2 13.84 0.00 1.35 0.08 0.05 84.67
13:05:28 13:03:01 3 26.88 0.00 2.33 0.00 0.10 70.69
13:05:28 13:03:01 4 19.78 0.00 1.86 0.02 0.08 78.26
13:05:28 13:03:01 5 19.06 0.00 2.00 0.70 0.07 78.17
13:05:28 13:03:01 6 20.02 0.00 1.76 6.91 0.08 71.22
13:05:28 13:03:01 7 25.37 0.00 2.11 0.03 0.07 72.41
13:05:28 13:04:01 all 1.28 0.00 0.19 1.13 0.04 97.36
13:05:28 13:04:01 0 0.98 0.00 0.18 0.00 0.03 98.80
13:05:28 13:04:01 1 1.32 0.00 0.22 0.00 0.03 98.43
13:05:28 13:04:01 2 1.17 0.00 0.13 0.00 0.03 98.66
13:05:28 13:04:01 3 1.45 0.00 0.22 0.02 0.03 98.28
13:05:28 13:04:01 4 1.32 0.00 0.20 0.00 0.03 98.45
13:05:28 13:04:01 5 2.04 0.00 0.20 0.20 0.05 97.51
13:05:28 13:04:01 6 0.48 0.00 0.10 8.85 0.02 90.55
13:05:28 13:04:01 7 1.48 0.00 0.25 0.00 0.07 98.20
13:05:28 13:05:01 all 4.53 0.00 0.76 1.48 0.04 93.19
13:05:28 13:05:01 0 2.24 0.00 0.53 6.01 0.03 91.19
13:05:28 13:05:01 1 18.46 0.00 1.05 0.43 0.05 80.00
13:05:28 13:05:01 2 2.14 0.00 0.75 0.12 0.02 96.98
13:05:28 13:05:01 3 2.03 0.00 0.85 0.80 0.03 96.28
13:05:28 13:05:01 4 4.25 0.00 0.69 0.05 0.03 94.98
13:05:28 13:05:01 5 3.44 0.00 0.69 0.77 0.05 95.05
13:05:28 13:05:01 6 1.55 0.00 0.67 3.09 0.03 94.66
13:05:28 13:05:01 7 2.16 0.00 0.79 0.59 0.05 96.42
13:05:28 Average: all 10.67 0.00 1.89 3.43 0.08 83.94
13:05:28 Average: 0 7.28 0.00 1.79 4.52 0.07 86.34
13:05:28 Average: 1 10.31 0.00 1.73 4.71 0.05 83.21
13:05:28 Average: 2 11.48 0.00 2.00 2.33 0.12 84.06
13:05:28 Average: 3 14.31 0.00 1.95 0.89 0.10 82.75
13:05:28 Average: 4 11.34 0.00 2.00 4.70 0.05 81.90
13:05:28 Average: 5 12.59 0.00 1.82 1.89 0.06 83.65
13:05:28 Average: 6 7.93 0.00 1.79 3.39 0.12 86.76
13:05:28 Average: 7 10.05 0.00 2.02 5.01 0.06 82.87
13:05:28
13:05:28
13:05:28