08:54:54 Started by upstream project "policy-docker-master-merge-java" build number 349 08:54:54 originally caused by: 08:54:54 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137725 08:54:54 Running as SYSTEM 08:54:54 [EnvInject] - Loading node environment variables. 08:54:54 Building remotely on prd-ubuntu1804-docker-8c-8g-25485 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap 08:54:54 [ssh-agent] Looking for ssh-agent implementation... 08:54:54 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 08:54:54 $ ssh-agent 08:54:54 SSH_AUTH_SOCK=/tmp/ssh-rJyZ08sy0nxh/agent.2085 08:54:54 SSH_AGENT_PID=2087 08:54:54 [ssh-agent] Started. 08:54:54 Running ssh-add (command line suppressed) 08:54:54 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_8817876947068470545.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_8817876947068470545.key) 08:54:54 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 08:54:54 The recommended git tool is: NONE 08:54:56 using credential onap-jenkins-ssh 08:54:56 Wiping out workspace first. 08:54:56 Cloning the remote Git repository 08:54:56 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 08:54:56 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 08:54:56 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 08:54:56 > git --version # timeout=10 08:54:56 > git --version # 'git version 2.17.1' 08:54:56 using GIT_SSH to set credentials Gerrit user 08:54:56 Verifying host key using manually-configured host key entries 08:54:56 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 08:54:56 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 08:54:56 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 08:54:57 Avoid second fetch 08:54:57 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 08:54:57 Checking out Revision 427f193118436b2aa7664f72fcb16ca1b25b8061 (refs/remotes/origin/master) 08:54:57 > git config core.sparsecheckout # timeout=10 08:54:57 > git checkout -f 427f193118436b2aa7664f72fcb16ca1b25b8061 # timeout=30 08:54:57 Commit message: "Merge "Add Participant Simulator chart"" 08:54:57 > git rev-list --no-walk deb0e121d5b4b9bd68334c2565aae21d8eed0d21 # timeout=10 08:54:57 provisioning config files... 
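To inspect exactly what this job built, the checkout above can be repeated locally. The mirror URL and commit hash below are taken verbatim from the log; the target directory name is arbitrary.

  git clone git://cloud.onap.org/mirror/policy/docker.git policy-docker   # same mirror the job fetched from
  cd policy-docker
  git checkout 427f193118436b2aa7664f72fcb16ca1b25b8061                   # revision the build checked out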
08:54:57 copy managed file [npmrc] to file:/home/jenkins/.npmrc 08:54:57 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 08:54:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14142152991202834662.sh 08:54:57 ---> python-tools-install.sh 08:54:57 Setup pyenv: 08:54:57 * system (set by /opt/pyenv/version) 08:54:57 * 3.8.13 (set by /opt/pyenv/version) 08:54:57 * 3.9.13 (set by /opt/pyenv/version) 08:54:57 * 3.10.6 (set by /opt/pyenv/version) 08:55:02 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-oy3i 08:55:02 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 08:55:05 lf-activate-venv(): INFO: Installing: lftools 08:55:41 lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH 08:55:41 Generating Requirements File 08:56:08 Python 3.10.6 08:56:09 pip 24.0 from /tmp/venv-oy3i/lib/python3.10/site-packages/pip (python 3.10) 08:56:09 appdirs==1.4.4 08:56:09 argcomplete==3.3.0 08:56:09 aspy.yaml==1.3.0 08:56:09 attrs==23.2.0 08:56:09 autopage==0.5.2 08:56:09 beautifulsoup4==4.12.3 08:56:09 boto3==1.34.90 08:56:09 botocore==1.34.90 08:56:09 bs4==0.0.2 08:56:09 cachetools==5.3.3 08:56:09 certifi==2024.2.2 08:56:09 cffi==1.16.0 08:56:09 cfgv==3.4.0 08:56:09 chardet==5.2.0 08:56:09 charset-normalizer==3.3.2 08:56:09 click==8.1.7 08:56:09 cliff==4.6.0 08:56:09 cmd2==2.4.3 08:56:09 cryptography==3.3.2 08:56:09 debtcollector==3.0.0 08:56:09 decorator==5.1.1 08:56:09 defusedxml==0.7.1 08:56:09 Deprecated==1.2.14 08:56:09 distlib==0.3.8 08:56:09 dnspython==2.6.1 08:56:09 docker==4.2.2 08:56:09 dogpile.cache==1.3.2 08:56:09 email_validator==2.1.1 08:56:09 filelock==3.13.4 08:56:09 future==1.0.0 08:56:09 gitdb==4.0.11 08:56:09 GitPython==3.1.43 08:56:09 google-auth==2.29.0 08:56:09 httplib2==0.22.0 08:56:09 identify==2.5.36 08:56:09 idna==3.7 08:56:09 importlib-resources==1.5.0 08:56:09 iso8601==2.1.0 08:56:09 Jinja2==3.1.3 08:56:09 jmespath==1.0.1 08:56:09 jsonpatch==1.33 08:56:09 jsonpointer==2.4 08:56:09 jsonschema==4.21.1 08:56:09 jsonschema-specifications==2023.12.1 08:56:09 keystoneauth1==5.6.0 08:56:09 kubernetes==29.0.0 08:56:09 lftools==0.37.10 08:56:09 lxml==5.2.1 08:56:09 MarkupSafe==2.1.5 08:56:09 msgpack==1.0.8 08:56:09 multi_key_dict==2.0.3 08:56:09 munch==4.0.0 08:56:09 netaddr==1.2.1 08:56:09 netifaces==0.11.0 08:56:09 niet==1.4.2 08:56:09 nodeenv==1.8.0 08:56:09 oauth2client==4.1.3 08:56:09 oauthlib==3.2.2 08:56:09 openstacksdk==3.1.0 08:56:09 os-client-config==2.1.0 08:56:09 os-service-types==1.7.0 08:56:09 osc-lib==3.0.1 08:56:09 oslo.config==9.4.0 08:56:09 oslo.context==5.5.0 08:56:09 oslo.i18n==6.3.0 08:56:09 oslo.log==5.5.1 08:56:09 oslo.serialization==5.4.0 08:56:09 oslo.utils==7.1.0 08:56:09 packaging==24.0 08:56:09 pbr==6.0.0 08:56:09 platformdirs==4.2.1 08:56:09 prettytable==3.10.0 08:56:09 pyasn1==0.6.0 08:56:09 pyasn1_modules==0.4.0 08:56:09 pycparser==2.22 08:56:09 pygerrit2==2.0.15 08:56:09 PyGithub==2.3.0 08:56:09 pyinotify==0.9.6 08:56:09 PyJWT==2.8.0 08:56:09 PyNaCl==1.5.0 08:56:09 pyparsing==2.4.7 08:56:09 pyperclip==1.8.2 08:56:09 pyrsistent==0.20.0 08:56:09 python-cinderclient==9.5.0 08:56:09 python-dateutil==2.9.0.post0 08:56:09 python-heatclient==3.5.0 08:56:09 python-jenkins==1.8.2 08:56:09 python-keystoneclient==5.4.0 08:56:09 python-magnumclient==4.4.0 08:56:09 python-novaclient==18.6.0 08:56:09 python-openstackclient==6.6.0 08:56:09 python-swiftclient==4.5.0 08:56:09 PyYAML==6.0.1 08:56:09 referencing==0.35.0 08:56:09 requests==2.31.0 08:56:09 requests-oauthlib==2.0.0 08:56:09 
requestsexceptions==1.4.0 08:56:09 rfc3986==2.0.0 08:56:09 rpds-py==0.18.0 08:56:09 rsa==4.9 08:56:09 ruamel.yaml==0.18.6 08:56:09 ruamel.yaml.clib==0.2.8 08:56:09 s3transfer==0.10.1 08:56:09 simplejson==3.19.2 08:56:09 six==1.16.0 08:56:09 smmap==5.0.1 08:56:09 soupsieve==2.5 08:56:09 stevedore==5.2.0 08:56:09 tabulate==0.9.0 08:56:09 toml==0.10.2 08:56:09 tomlkit==0.12.4 08:56:09 tqdm==4.66.2 08:56:09 typing_extensions==4.11.0 08:56:09 tzdata==2024.1 08:56:09 urllib3==1.26.18 08:56:09 virtualenv==20.26.0 08:56:09 wcwidth==0.2.13 08:56:09 websocket-client==1.8.0 08:56:09 wrapt==1.16.0 08:56:09 xdg==6.0.0 08:56:09 xmltodict==0.13.0 08:56:09 yq==3.4.1 08:56:09 [EnvInject] - Injecting environment variables from a build step. 08:56:09 [EnvInject] - Injecting as environment variables the properties content 08:56:09 SET_JDK_VERSION=openjdk17 08:56:09 GIT_URL="git://cloud.onap.org/mirror" 08:56:09 08:56:09 [EnvInject] - Variables injected successfully. 08:56:09 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins15207724289940858301.sh 08:56:09 ---> update-java-alternatives.sh 08:56:09 ---> Updating Java version 08:56:10 ---> Ubuntu/Debian system detected 08:56:10 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 08:56:10 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 08:56:10 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 08:56:10 openjdk version "17.0.4" 2022-07-19 08:56:10 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 08:56:10 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 08:56:10 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 08:56:10 [EnvInject] - Injecting environment variables from a build step. 08:56:10 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 08:56:10 [EnvInject] - Variables injected successfully. 
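Whether the alternatives switch above actually took effect can be double-checked on a node with standard Debian/Ubuntu tooling; these commands are not part of the job scripts, just a quick sanity check.

  update-alternatives --query java | grep '^Value:'   # should point at /usr/lib/jvm/java-17-openjdk-amd64/bin/java
  java -version                                       # expect the OpenJDK 17 banner shown above
  echo "$JAVA_HOME"                                   # /usr/lib/jvm/java-17-openjdk-amd64 per /tmp/java.env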
08:56:10 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins13317469723595768265.sh 08:56:10 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 08:56:10 + set +u 08:56:10 + save_set 08:56:10 + RUN_CSIT_SAVE_SET=ehxB 08:56:10 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 08:56:10 + '[' 1 -eq 0 ']' 08:56:10 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:56:10 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:56:10 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:56:10 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 08:56:10 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 08:56:10 + export ROBOT_VARIABLES= 08:56:10 + ROBOT_VARIABLES= 08:56:10 + export PROJECT=pap 08:56:10 + PROJECT=pap 08:56:10 + cd /w/workspace/policy-pap-master-project-csit-pap 08:56:10 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 08:56:10 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 08:56:10 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 08:56:10 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 08:56:10 + relax_set 08:56:10 + set +e 08:56:10 + set +o pipefail 08:56:10 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 08:56:10 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:56:10 +++ mktemp -d 08:56:10 ++ ROBOT_VENV=/tmp/tmp.8RCxNEnqm6 08:56:10 ++ echo ROBOT_VENV=/tmp/tmp.8RCxNEnqm6 08:56:10 +++ python3 --version 08:56:10 ++ echo 'Python version is: Python 3.6.9' 08:56:10 Python version is: Python 3.6.9 08:56:10 ++ python3 -m venv --clear /tmp/tmp.8RCxNEnqm6 08:56:11 ++ source /tmp/tmp.8RCxNEnqm6/bin/activate 08:56:11 +++ deactivate nondestructive 08:56:11 +++ '[' -n '' ']' 08:56:11 +++ '[' -n '' ']' 08:56:11 +++ '[' -n /bin/bash -o -n '' ']' 08:56:11 +++ hash -r 08:56:11 +++ '[' -n '' ']' 08:56:11 +++ unset VIRTUAL_ENV 08:56:11 +++ '[' '!' 
nondestructive = nondestructive ']' 08:56:11 +++ VIRTUAL_ENV=/tmp/tmp.8RCxNEnqm6 08:56:11 +++ export VIRTUAL_ENV 08:56:11 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:56:11 +++ PATH=/tmp/tmp.8RCxNEnqm6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:56:11 +++ export PATH 08:56:11 +++ '[' -n '' ']' 08:56:11 +++ '[' -z '' ']' 08:56:11 +++ _OLD_VIRTUAL_PS1= 08:56:11 +++ '[' 'x(tmp.8RCxNEnqm6) ' '!=' x ']' 08:56:11 +++ PS1='(tmp.8RCxNEnqm6) ' 08:56:11 +++ export PS1 08:56:11 +++ '[' -n /bin/bash -o -n '' ']' 08:56:11 +++ hash -r 08:56:11 ++ set -exu 08:56:11 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 08:56:15 ++ echo 'Installing Python Requirements' 08:56:15 Installing Python Requirements 08:56:15 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 08:56:33 ++ python3 -m pip -qq freeze 08:56:33 bcrypt==4.0.1 08:56:33 beautifulsoup4==4.12.3 08:56:33 bitarray==2.9.2 08:56:33 certifi==2024.2.2 08:56:33 cffi==1.15.1 08:56:33 charset-normalizer==2.0.12 08:56:33 cryptography==40.0.2 08:56:33 decorator==5.1.1 08:56:33 elasticsearch==7.17.9 08:56:33 elasticsearch-dsl==7.4.1 08:56:33 enum34==1.1.10 08:56:33 idna==3.7 08:56:33 importlib-resources==5.4.0 08:56:33 ipaddr==2.2.0 08:56:33 isodate==0.6.1 08:56:33 jmespath==0.10.0 08:56:33 jsonpatch==1.32 08:56:33 jsonpath-rw==1.4.0 08:56:33 jsonpointer==2.3 08:56:33 lxml==5.2.1 08:56:33 netaddr==0.8.0 08:56:33 netifaces==0.11.0 08:56:33 odltools==0.1.28 08:56:33 paramiko==3.4.0 08:56:33 pkg_resources==0.0.0 08:56:33 ply==3.11 08:56:33 pyang==2.6.0 08:56:33 pyangbind==0.8.1 08:56:33 pycparser==2.21 08:56:33 pyhocon==0.3.60 08:56:33 PyNaCl==1.5.0 08:56:33 pyparsing==3.1.2 08:56:33 python-dateutil==2.9.0.post0 08:56:33 regex==2023.8.8 08:56:33 requests==2.27.1 08:56:33 robotframework==6.1.1 08:56:33 robotframework-httplibrary==0.4.2 08:56:33 robotframework-pythonlibcore==3.0.0 08:56:33 robotframework-requests==0.9.4 08:56:33 robotframework-selenium2library==3.0.0 08:56:33 robotframework-seleniumlibrary==5.1.3 08:56:33 robotframework-sshlibrary==3.8.0 08:56:33 scapy==2.5.0 08:56:33 scp==0.14.5 08:56:33 selenium==3.141.0 08:56:33 six==1.16.0 08:56:33 soupsieve==2.3.2.post1 08:56:33 urllib3==1.26.18 08:56:33 waitress==2.0.0 08:56:33 WebOb==1.8.7 08:56:33 WebTest==3.0.0 08:56:33 zipp==3.6.0 08:56:33 ++ mkdir -p /tmp/tmp.8RCxNEnqm6/src/onap 08:56:33 ++ rm -rf /tmp/tmp.8RCxNEnqm6/src/onap/testsuite 08:56:33 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 08:56:39 ++ echo 'Installing python confluent-kafka library' 08:56:39 Installing python confluent-kafka library 08:56:39 ++ python3 -m pip install -qq confluent-kafka 08:56:40 ++ echo 'Uninstall docker-py and reinstall docker.' 08:56:40 Uninstall docker-py and reinstall docker. 
08:56:40 ++ python3 -m pip uninstall -y -qq docker 08:56:41 ++ python3 -m pip install -U -qq docker 08:56:42 ++ python3 -m pip -qq freeze 08:56:42 bcrypt==4.0.1 08:56:42 beautifulsoup4==4.12.3 08:56:42 bitarray==2.9.2 08:56:42 certifi==2024.2.2 08:56:42 cffi==1.15.1 08:56:42 charset-normalizer==2.0.12 08:56:42 confluent-kafka==2.3.0 08:56:42 cryptography==40.0.2 08:56:42 decorator==5.1.1 08:56:42 deepdiff==5.7.0 08:56:42 dnspython==2.2.1 08:56:42 docker==5.0.3 08:56:42 elasticsearch==7.17.9 08:56:42 elasticsearch-dsl==7.4.1 08:56:42 enum34==1.1.10 08:56:42 future==1.0.0 08:56:42 idna==3.7 08:56:42 importlib-resources==5.4.0 08:56:42 ipaddr==2.2.0 08:56:42 isodate==0.6.1 08:56:42 Jinja2==3.0.3 08:56:42 jmespath==0.10.0 08:56:42 jsonpatch==1.32 08:56:42 jsonpath-rw==1.4.0 08:56:42 jsonpointer==2.3 08:56:42 kafka-python==2.0.2 08:56:42 lxml==5.2.1 08:56:42 MarkupSafe==2.0.1 08:56:42 more-itertools==5.0.0 08:56:42 netaddr==0.8.0 08:56:42 netifaces==0.11.0 08:56:42 odltools==0.1.28 08:56:42 ordered-set==4.0.2 08:56:42 paramiko==3.4.0 08:56:42 pbr==6.0.0 08:56:42 pkg_resources==0.0.0 08:56:42 ply==3.11 08:56:42 protobuf==3.19.6 08:56:42 pyang==2.6.0 08:56:42 pyangbind==0.8.1 08:56:42 pycparser==2.21 08:56:42 pyhocon==0.3.60 08:56:42 PyNaCl==1.5.0 08:56:42 pyparsing==3.1.2 08:56:42 python-dateutil==2.9.0.post0 08:56:42 PyYAML==6.0.1 08:56:42 regex==2023.8.8 08:56:42 requests==2.27.1 08:56:42 robotframework==6.1.1 08:56:42 robotframework-httplibrary==0.4.2 08:56:42 robotframework-onap==0.6.0.dev105 08:56:42 robotframework-pythonlibcore==3.0.0 08:56:42 robotframework-requests==0.9.4 08:56:42 robotframework-selenium2library==3.0.0 08:56:42 robotframework-seleniumlibrary==5.1.3 08:56:42 robotframework-sshlibrary==3.8.0 08:56:42 robotlibcore-temp==1.0.2 08:56:42 scapy==2.5.0 08:56:42 scp==0.14.5 08:56:42 selenium==3.141.0 08:56:42 six==1.16.0 08:56:42 soupsieve==2.3.2.post1 08:56:42 urllib3==1.26.18 08:56:42 waitress==2.0.0 08:56:42 WebOb==1.8.7 08:56:42 websocket-client==1.3.1 08:56:42 WebTest==3.0.0 08:56:42 zipp==3.6.0 08:56:42 ++ uname 08:56:42 ++ grep -q Linux 08:56:42 ++ sudo apt-get -y -qq install libxml2-utils 08:56:42 + load_set 08:56:42 + _setopts=ehuxB 08:56:42 ++ tr : ' ' 08:56:42 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o braceexpand 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o hashall 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o interactive-comments 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o nounset 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o xtrace 08:56:42 ++ echo ehuxB 08:56:42 ++ sed 's/./& /g' 08:56:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:56:42 + set +e 08:56:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:56:42 + set +h 08:56:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:56:42 + set +u 08:56:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:56:42 + set +x 08:56:42 + source_safely /tmp/tmp.8RCxNEnqm6/bin/activate 08:56:42 + '[' -z /tmp/tmp.8RCxNEnqm6/bin/activate ']' 08:56:42 + relax_set 08:56:42 + set +e 08:56:42 + set +o pipefail 08:56:42 + . 
/tmp/tmp.8RCxNEnqm6/bin/activate 08:56:42 ++ deactivate nondestructive 08:56:42 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 08:56:42 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:56:42 ++ export PATH 08:56:42 ++ unset _OLD_VIRTUAL_PATH 08:56:42 ++ '[' -n '' ']' 08:56:42 ++ '[' -n /bin/bash -o -n '' ']' 08:56:42 ++ hash -r 08:56:42 ++ '[' -n '' ']' 08:56:42 ++ unset VIRTUAL_ENV 08:56:42 ++ '[' '!' nondestructive = nondestructive ']' 08:56:42 ++ VIRTUAL_ENV=/tmp/tmp.8RCxNEnqm6 08:56:42 ++ export VIRTUAL_ENV 08:56:42 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:56:42 ++ PATH=/tmp/tmp.8RCxNEnqm6/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:56:42 ++ export PATH 08:56:42 ++ '[' -n '' ']' 08:56:42 ++ '[' -z '' ']' 08:56:42 ++ _OLD_VIRTUAL_PS1='(tmp.8RCxNEnqm6) ' 08:56:42 ++ '[' 'x(tmp.8RCxNEnqm6) ' '!=' x ']' 08:56:42 ++ PS1='(tmp.8RCxNEnqm6) (tmp.8RCxNEnqm6) ' 08:56:42 ++ export PS1 08:56:42 ++ '[' -n /bin/bash -o -n '' ']' 08:56:42 ++ hash -r 08:56:42 + load_set 08:56:42 + _setopts=hxB 08:56:42 ++ echo braceexpand:hashall:interactive-comments:xtrace 08:56:42 ++ tr : ' ' 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o braceexpand 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o hashall 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o interactive-comments 08:56:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:56:42 + set +o xtrace 08:56:42 ++ echo hxB 08:56:42 ++ sed 's/./& /g' 08:56:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:56:42 + set +h 08:56:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:56:42 + set +x 08:56:42 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 08:56:42 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 08:56:42 + export TEST_OPTIONS= 08:56:42 + TEST_OPTIONS= 08:56:42 ++ mktemp -d 08:56:42 + WORKDIR=/tmp/tmp.9nztubu5q5 08:56:42 + cd /tmp/tmp.9nztubu5q5 08:56:42 + docker login -u docker -p docker nexus3.onap.org:10001 08:56:43 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 08:56:43 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 08:56:43 Configure a credential helper to remove this warning. 
See 08:56:43 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 08:56:43 08:56:43 Login Succeeded 08:56:43 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 08:56:43 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 08:56:43 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 08:56:43 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 08:56:43 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 08:56:43 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 08:56:43 + relax_set 08:56:43 + set +e 08:56:43 + set +o pipefail 08:56:43 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 08:56:43 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 08:56:43 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:56:43 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 08:56:43 +++ GERRIT_BRANCH=master 08:56:43 +++ echo GERRIT_BRANCH=master 08:56:43 GERRIT_BRANCH=master 08:56:43 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 08:56:43 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 08:56:43 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 08:56:43 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 08:56:44 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 08:56:44 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 08:56:44 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 08:56:44 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 08:56:44 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 08:56:44 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 08:56:44 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 08:56:44 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:56:44 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 08:56:44 +++ grafana=false 08:56:44 +++ gui=false 08:56:44 +++ [[ 2 -gt 0 ]] 08:56:44 +++ key=apex-pdp 08:56:44 +++ case $key in 08:56:44 +++ echo apex-pdp 08:56:44 apex-pdp 08:56:44 +++ component=apex-pdp 08:56:44 +++ shift 08:56:44 +++ [[ 1 -gt 0 ]] 08:56:44 +++ key=--grafana 08:56:44 +++ case $key in 08:56:44 +++ grafana=true 08:56:44 +++ shift 08:56:44 +++ [[ 0 -gt 0 ]] 08:56:44 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 08:56:44 +++ echo 'Configuring docker compose...' 08:56:44 Configuring docker compose... 
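For debugging outside Jenkins, the same stack can be brought up by hand with the commands start-compose.sh runs next. This is a minimal sketch assuming a policy/docker checkout; the comments describe what the sourced scripts presumably do, based on their names and the surrounding log.

  cd compose
  source export-ports.sh                  # presumably exports the host port mappings (30003, 30002, 30259, ...)
  source get-versions.sh                  # presumably resolves image tags such as 3.1.2-SNAPSHOT
  docker-compose up -d apex-pdp grafana   # same services the job starts below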
08:56:44 +++ source export-ports.sh 08:56:44 +++ source get-versions.sh 08:56:46 +++ '[' -z pap ']' 08:56:46 +++ '[' -n apex-pdp ']' 08:56:46 +++ '[' apex-pdp == logs ']' 08:56:46 +++ '[' true = true ']' 08:56:46 +++ echo 'Starting apex-pdp application with Grafana' 08:56:46 Starting apex-pdp application with Grafana 08:56:46 +++ docker-compose up -d apex-pdp grafana 08:56:47 Creating network "compose_default" with the default driver 08:56:47 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 08:56:48 latest: Pulling from prom/prometheus 08:56:51 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 08:56:51 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 08:56:51 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 08:56:51 latest: Pulling from grafana/grafana 08:56:56 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e 08:56:56 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 08:56:56 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 08:56:56 10.10.2: Pulling from mariadb 08:57:01 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 08:57:01 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 08:57:01 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 08:57:01 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 08:57:05 Digest: sha256:d8f1d8ae67fc0b53114a44577cb43c90a3a3281908d2f2418d7fbd203413bd6a 08:57:05 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 08:57:05 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 08:57:06 latest: Pulling from confluentinc/cp-zookeeper 08:57:16 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 08:57:16 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 08:57:16 Pulling kafka (confluentinc/cp-kafka:latest)... 08:57:17 latest: Pulling from confluentinc/cp-kafka 08:57:19 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa 08:57:19 Status: Downloaded newer image for confluentinc/cp-kafka:latest 08:57:19 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 08:57:19 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 08:57:23 Digest: sha256:59f0448c5bbe494c6652e1913320d9fe99024bcaef51f510204d55770b94ba9d 08:57:23 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 08:57:23 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 08:57:24 3.1.2-SNAPSHOT: Pulling from onap/policy-api 08:57:25 Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce 08:57:25 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 08:57:25 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 08:57:26 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 08:57:44 Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06 08:57:45 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 08:57:45 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
08:57:47 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 08:58:00 Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982 08:58:00 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 08:58:00 Creating zookeeper ... 08:58:00 Creating prometheus ... 08:58:00 Creating simulator ... 08:58:00 Creating mariadb ... 08:58:10 Creating prometheus ... done 08:58:10 Creating grafana ... 08:58:11 Creating grafana ... done 08:58:11 Creating mariadb ... done 08:58:11 Creating policy-db-migrator ... 08:58:12 Creating policy-db-migrator ... done 08:58:12 Creating policy-api ... 08:58:13 Creating simulator ... done 08:58:15 Creating zookeeper ... done 08:58:15 Creating kafka ... 08:58:16 Creating policy-api ... done 08:58:17 Creating kafka ... done 08:58:17 Creating policy-pap ... 08:58:18 Creating policy-pap ... done 08:58:18 Creating policy-apex-pdp ... 08:58:19 Creating policy-apex-pdp ... done 08:58:19 +++ echo 'Prometheus server: http://localhost:30259' 08:58:19 Prometheus server: http://localhost:30259 08:58:19 +++ echo 'Grafana server: http://localhost:30269' 08:58:19 Grafana server: http://localhost:30269 08:58:19 +++ cd /w/workspace/policy-pap-master-project-csit-pap 08:58:19 ++ sleep 10 08:58:29 ++ unset http_proxy https_proxy 08:58:29 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 08:58:29 Waiting for REST to come up on localhost port 30003... 08:58:29 NAMES STATUS 08:58:29 policy-apex-pdp Up 10 seconds 08:58:29 policy-pap Up 11 seconds 08:58:29 kafka Up 12 seconds 08:58:29 policy-api Up 13 seconds 08:58:29 grafana Up 17 seconds 08:58:29 simulator Up 15 seconds 08:58:29 mariadb Up 17 seconds 08:58:29 prometheus Up 18 seconds 08:58:29 zookeeper Up 14 seconds 08:58:34 NAMES STATUS 08:58:34 policy-apex-pdp Up 15 seconds 08:58:34 policy-pap Up 16 seconds 08:58:34 kafka Up 17 seconds 08:58:34 policy-api Up 18 seconds 08:58:34 grafana Up 22 seconds 08:58:34 simulator Up 20 seconds 08:58:34 mariadb Up 22 seconds 08:58:34 prometheus Up 23 seconds 08:58:34 zookeeper Up 19 seconds 08:58:39 NAMES STATUS 08:58:39 policy-apex-pdp Up 20 seconds 08:58:39 policy-pap Up 21 seconds 08:58:39 kafka Up 22 seconds 08:58:39 policy-api Up 23 seconds 08:58:39 grafana Up 27 seconds 08:58:39 simulator Up 25 seconds 08:58:39 mariadb Up 27 seconds 08:58:39 prometheus Up 28 seconds 08:58:39 zookeeper Up 24 seconds 08:58:44 NAMES STATUS 08:58:44 policy-apex-pdp Up 25 seconds 08:58:44 policy-pap Up 26 seconds 08:58:44 kafka Up 27 seconds 08:58:44 policy-api Up 28 seconds 08:58:44 grafana Up 32 seconds 08:58:44 simulator Up 30 seconds 08:58:44 mariadb Up 32 seconds 08:58:44 prometheus Up 33 seconds 08:58:44 zookeeper Up 29 seconds 08:58:49 NAMES STATUS 08:58:49 policy-apex-pdp Up 30 seconds 08:58:49 policy-pap Up 31 seconds 08:58:49 kafka Up 32 seconds 08:58:49 policy-api Up 33 seconds 08:58:49 grafana Up 38 seconds 08:58:49 simulator Up 35 seconds 08:58:49 mariadb Up 37 seconds 08:58:49 prometheus Up 38 seconds 08:58:49 zookeeper Up 34 seconds 08:58:54 NAMES STATUS 08:58:54 policy-apex-pdp Up 35 seconds 08:58:54 policy-pap Up 36 seconds 08:58:54 kafka Up 37 seconds 08:58:54 policy-api Up 38 seconds 08:58:54 grafana Up 43 seconds 08:58:54 simulator Up 40 seconds 08:58:54 mariadb Up 42 seconds 08:58:54 prometheus Up 43 seconds 08:58:54 zookeeper Up 39 seconds 08:58:54 ++ export 'SUITES=pap-test.robot 08:58:54 pap-slas.robot' 08:58:54 ++ SUITES='pap-test.robot 08:58:54 pap-slas.robot' 08:58:54 ++ 
ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
08:58:54 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
08:58:54 + load_set
08:58:54 + _setopts=hxB
08:58:54 ++ echo braceexpand:hashall:interactive-comments:xtrace
08:58:54 ++ tr : ' '
08:58:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:58:54 + set +o braceexpand
08:58:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:58:54 + set +o hashall
08:58:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:58:54 + set +o interactive-comments
08:58:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:58:54 + set +o xtrace
08:58:54 ++ echo hxB
08:58:54 ++ sed 's/./& /g'
08:58:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
08:58:54 + set +h
08:58:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
08:58:54 + set +x
08:58:54 + docker_stats
08:58:54 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
08:58:54 ++ uname -s
08:58:54 + '[' Linux == Darwin ']'
08:58:54 + sh -c 'top -bn1 | head -3'
08:58:54 top - 08:58:54 up 4 min, 0 users, load average: 3.97, 1.77, 0.70
08:58:54 Tasks: 209 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
08:58:54 %Cpu(s): 12.5 us, 2.6 sy, 0.0 ni, 79.4 id, 5.4 wa, 0.0 hi, 0.1 si, 0.1 st
08:58:54 + echo
08:58:54
08:58:54 + sh -c 'free -h'
08:58:54 total used free shared buff/cache available
08:58:54 Mem: 31G 2.7G 22G 1.3M 6.2G 28G
08:58:54 Swap: 1.0G 0B 1.0G
08:58:54 + echo
08:58:54 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
08:58:54
08:58:54 NAMES STATUS
08:58:54 policy-apex-pdp Up 35 seconds
08:58:54 policy-pap Up 36 seconds
08:58:54 kafka Up 37 seconds
08:58:54 policy-api Up 38 seconds
08:58:54 grafana Up 43 seconds
08:58:54 simulator Up 41 seconds
08:58:54 mariadb Up 42 seconds
08:58:54 prometheus Up 44 seconds
08:58:54 zookeeper Up 39 seconds
08:58:54 + echo
08:58:54
08:58:54 + docker stats --no-stream
08:58:57 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
08:58:57 cf9114871e24 policy-apex-pdp 15.80% 178MiB / 31.41GiB 0.55% 9.93kB / 19.8kB 0B / 0B 49
08:58:57 57873205d9c3 policy-pap 5.49% 502.3MiB / 31.41GiB 1.56% 35.9kB / 38.8kB 0B / 149MB 63
08:58:57 84563581413a kafka 26.30% 381.6MiB / 31.41GiB 1.19% 82.7kB / 84.8kB 0B / 508kB 85
08:58:57 964cc166306f policy-api 0.10% 464.7MiB / 31.41GiB 1.44% 989kB / 673kB 0B / 0B 52
08:58:57 b100b2b1ca2d grafana 0.04% 58MiB / 31.41GiB 0.18% 19.2kB / 3.55kB 0B / 24.9MB 19
08:58:57 264c388f7a92 simulator 0.07% 120.5MiB / 31.41GiB 0.37% 1.31kB / 0B 0B / 0B 76
08:58:57 b52d89a02784 mariadb 0.01% 102.3MiB / 31.41GiB 0.32% 933kB / 1.18MB 11MB / 68.4MB 37
08:58:57 d3c2b924e83b prometheus 0.48% 20.21MiB / 31.41GiB 0.06% 39.5kB / 1.95kB 131kB / 0B 13
08:58:57 3a03b3ec39eb zookeeper 0.10% 101.1MiB / 31.41GiB 0.31% 61.2kB / 54.6kB 0B / 381kB 60
08:58:57 + echo
08:58:57
08:58:57 + cd /tmp/tmp.9nztubu5q5
08:58:57 + echo 'Reading the testplan:'
08:58:57 Reading the testplan:
08:58:57 + echo 'pap-test.robot
08:58:57 pap-slas.robot'
08:58:57 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
08:58:57 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
08:58:57 + cat testplan.txt
08:58:57 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
08:58:57 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
08:58:57 ++ xargs
08:58:57 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
08:58:57 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
08:58:57 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
08:58:57 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
08:58:57 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
08:58:57 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
08:58:57 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
08:58:57 + relax_set
08:58:57 + set +e
08:58:57 + set +o pipefail
08:58:57 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
08:58:57 ==============================================================================
08:58:57 pap
08:58:57 ==============================================================================
08:58:57 pap.Pap-Test
08:58:57 ==============================================================================
08:58:58 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
08:58:58 ------------------------------------------------------------------------------
08:58:59 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
08:58:59 ------------------------------------------------------------------------------
08:58:59 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
08:58:59 ------------------------------------------------------------------------------
08:59:00 Healthcheck :: Verify policy pap health check | PASS |
08:59:00 ------------------------------------------------------------------------------
08:59:20 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
08:59:20 ------------------------------------------------------------------------------
08:59:20 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
08:59:20 ------------------------------------------------------------------------------
08:59:21 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
08:59:21 ------------------------------------------------------------------------------
08:59:21 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
08:59:21 ------------------------------------------------------------------------------
08:59:21 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
08:59:21 ------------------------------------------------------------------------------
08:59:21 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
08:59:21 ------------------------------------------------------------------------------
08:59:22 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
08:59:22 ------------------------------------------------------------------------------
08:59:22 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
08:59:22 ------------------------------------------------------------------------------
08:59:22 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
08:59:22 ------------------------------------------------------------------------------
08:59:22 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
08:59:22 ------------------------------------------------------------------------------
08:59:23 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
08:59:23 ------------------------------------------------------------------------------
08:59:23 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
08:59:23 ------------------------------------------------------------------------------
08:59:23 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
08:59:23 ------------------------------------------------------------------------------
08:59:43 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
08:59:43 DEPLOYMENT != UNDEPLOYMENT
08:59:43 ------------------------------------------------------------------------------
08:59:43 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
08:59:43 ------------------------------------------------------------------------------
08:59:43 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
08:59:43 ------------------------------------------------------------------------------
08:59:43 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
08:59:43 ------------------------------------------------------------------------------
08:59:44 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
08:59:44 ------------------------------------------------------------------------------
08:59:44 pap.Pap-Test | FAIL |
08:59:44 22 tests, 21 passed, 1 failed
08:59:44 ==============================================================================
08:59:44 pap.Pap-Slas
08:59:44 ==============================================================================
09:00:44 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
09:00:44 ------------------------------------------------------------------------------
09:00:44 pap.Pap-Slas | PASS |
09:00:44 8 tests, 8 passed, 0 failed
09:00:44 ==============================================================================
09:00:44 pap | FAIL |
09:00:44 30 tests, 29 passed, 1 failed
09:00:44 ==============================================================================
09:00:44 Output: /tmp/tmp.9nztubu5q5/output.xml
09:00:44 Log: /tmp/tmp.9nztubu5q5/log.html
09:00:44 Report: /tmp/tmp.9nztubu5q5/report.html
09:00:44 + RESULT=1
09:00:44 + load_set
09:00:44 + _setopts=hxB
09:00:44 ++ tr : ' '
09:00:44 ++ echo braceexpand:hashall:interactive-comments:xtrace
09:00:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:00:44 + set +o braceexpand
09:00:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:00:44 + set +o hashall
09:00:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:00:44 + set +o interactive-comments
09:00:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:00:44 + set +o xtrace
09:00:44 ++ echo hxB
09:00:44 ++ sed 's/./& /g'
09:00:44 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:00:44 + set +h
09:00:44 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:00:44 + set +x
09:00:44 + echo 'RESULT: 1'
09:00:44 RESULT: 1
09:00:44 + exit 1
09:00:44 + on_exit
09:00:44 + rc=1
09:00:44 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
09:00:44 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
09:00:44 NAMES STATUS
09:00:44 policy-apex-pdp Up 2 minutes
09:00:44 policy-pap Up 2 minutes
09:00:44 kafka Up 2 minutes
09:00:44 policy-api Up 2 minutes
09:00:44 grafana Up 2 minutes
09:00:44 simulator Up 2 minutes
09:00:44 mariadb Up 2 minutes
09:00:44 prometheus Up 2 minutes
09:00:44 zookeeper Up 2 minutes
09:00:44 + docker_stats
09:00:44 ++ uname -s
09:00:44 + '[' Linux == Darwin ']'
09:00:44 + sh -c 'top -bn1 | head -3'
09:00:44 top - 09:00:44 up 6 min, 0 users, load average: 1.08, 1.42, 0.70
09:00:44 Tasks: 197 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
09:00:44 %Cpu(s): 10.1 us, 2.0 sy, 0.0 ni, 83.4 id, 4.4 wa, 0.0 hi, 0.1 si, 0.1 st
09:00:44 + echo
09:00:44
09:00:44 + sh -c 'free -h'
09:00:44 total used free shared buff/cache available
09:00:44 Mem: 31G 2.7G 22G 1.3M 6.2G 28G
09:00:44 Swap: 1.0G 0B 1.0G
09:00:44 + echo
09:00:44
09:00:44 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
09:00:44 NAMES STATUS
09:00:44 policy-apex-pdp Up 2 minutes
09:00:44 policy-pap Up 2 minutes
09:00:44 kafka Up 2 minutes
09:00:44 policy-api Up 2 minutes
09:00:44 grafana Up 2 minutes
09:00:44 simulator Up 2 minutes
09:00:44 mariadb Up 2 minutes
09:00:44 prometheus Up 2 minutes
09:00:44 zookeeper Up 2 minutes 09:00:44 + echo 09:00:44 09:00:44 + docker stats --no-stream 09:00:47 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 09:00:47 cf9114871e24 policy-apex-pdp 0.45% 181.8MiB / 31.41GiB 0.57% 57.3kB / 92.1kB 0B / 0B 52 09:00:47 57873205d9c3 policy-pap 1.01% 492.3MiB / 31.41GiB 1.53% 2.47MB / 1.05MB 0B / 149MB 67 09:00:47 84563581413a kafka 2.21% 388.1MiB / 31.41GiB 1.21% 251kB / 225kB 0B / 606kB 85 09:00:47 964cc166306f policy-api 0.14% 510.9MiB / 31.41GiB 1.59% 2.45MB / 1.13MB 0B / 0B 55 09:00:47 b100b2b1ca2d grafana 0.04% 64.34MiB / 31.41GiB 0.20% 20kB / 4.5kB 0B / 24.9MB 19 09:00:47 264c388f7a92 simulator 0.09% 120.7MiB / 31.41GiB 0.38% 1.54kB / 0B 0B / 0B 78 09:00:47 b52d89a02784 mariadb 0.01% 103.6MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 68.7MB 28 09:00:47 d3c2b924e83b prometheus 0.06% 25.5MiB / 31.41GiB 0.08% 219kB / 11.8kB 131kB / 0B 13 09:00:47 3a03b3ec39eb zookeeper 0.10% 100.9MiB / 31.41GiB 0.31% 64kB / 56.1kB 0B / 381kB 60 09:00:47 + echo 09:00:47 09:00:47 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 09:00:47 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 09:00:47 + relax_set 09:00:47 + set +e 09:00:47 + set +o pipefail 09:00:47 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 09:00:47 ++ echo 'Shut down started!' 09:00:47 Shut down started! 09:00:47 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 09:00:47 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 09:00:47 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 09:00:47 ++ source export-ports.sh 09:00:47 ++ source get-versions.sh 09:00:49 ++ echo 'Collecting logs from docker compose containers...' 09:00:49 Collecting logs from docker compose containers... 09:00:49 ++ docker-compose logs 09:00:51 ++ cat docker_compose.log 09:00:51 Attaching to policy-apex-pdp, policy-pap, kafka, policy-api, policy-db-migrator, grafana, simulator, mariadb, prometheus, zookeeper 09:00:51 kafka | ===> User 09:00:51 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 09:00:51 kafka | ===> Configuring ... 09:00:51 kafka | Running in Zookeeper mode... 09:00:51 kafka | ===> Running preflight checks ... 09:00:51 kafka | ===> Check if /var/lib/kafka/data is writable ... 09:00:51 kafka | ===> Check if Zookeeper is healthy ... 09:00:51 kafka | [2024-04-24 08:58:21,002] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,002] INFO Client environment:host.name=84563581413a (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,002] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,002] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,002] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | 
[2024-04-24 08:58:21,003] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,003] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,006] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,009] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 09:00:51 kafka | [2024-04-24 08:58:21,013] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 09:00:51 kafka | [2024-04-24 08:58:21,020] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:21,034] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:21,035] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:21,042] INFO Socket connection established, initiating session, client: /172.17.0.9:39730, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:21,078] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003e8bf0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:21,195] INFO Session: 0x1000003e8bf0000 closed (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:21,196] INFO EventThread shut down for session: 0x1000003e8bf0000 (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | Using log4j config /etc/kafka/log4j.properties 09:00:51 kafka | ===> Launching ... 09:00:51 kafka | ===> Launching kafka ... 
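The docker-compose logs dump being collected here interleaves every container, which makes it hard to follow a single service. After a run like this, plain Docker commands are enough to isolate one container's output; these are standard Docker/Compose commands, not part of the job scripts.

  docker-compose logs kafka            # only the kafka service, run from the compose folder
  docker logs --tail 200 policy-pap    # last 200 lines from the pap container
  docker logs -f policy-apex-pdp       # follow apex-pdp live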
09:00:51 kafka | [2024-04-24 08:58:21,835] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 09:00:51 kafka | [2024-04-24 08:58:22,147] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 09:00:51 kafka | [2024-04-24 08:58:22,223] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 09:00:51 kafka | [2024-04-24 08:58:22,224] INFO starting (kafka.server.KafkaServer) 09:00:51 kafka | [2024-04-24 08:58:22,224] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 09:00:51 kafka | [2024-04-24 08:58:22,235] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:host.name=84563581413a (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.
jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0
.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,239] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,241] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) 09:00:51 kafka | [2024-04-24 08:58:22,244] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 09:00:51 kafka | [2024-04-24 08:58:22,249] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:22,251] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 09:00:51 kafka | [2024-04-24 08:58:22,255] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:22,263] INFO Socket connection established, initiating session, client: /172.17.0.9:39732, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:22,270] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003e8bf0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 09:00:51 kafka | [2024-04-24 08:58:22,274] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 09:00:51 kafka | [2024-04-24 08:58:22,527] INFO Cluster ID = FWpz7Mn1RFGDoEChXT3QPg (kafka.server.KafkaServer) 09:00:51 kafka | [2024-04-24 08:58:22,529] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 09:00:51 kafka | [2024-04-24 08:58:22,582] INFO KafkaConfig values: 09:00:51 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 09:00:51 kafka | alter.config.policy.class.name = null 09:00:51 kafka | alter.log.dirs.replication.quota.window.num = 11 09:00:51 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 09:00:51 kafka | authorizer.class.name = 09:00:51 kafka | auto.create.topics.enable = true 09:00:51 kafka | auto.include.jmx.reporter = true 09:00:51 kafka | auto.leader.rebalance.enable = true 09:00:51 kafka | background.threads = 10 09:00:51 kafka | broker.heartbeat.interval.ms = 2000 09:00:51 kafka | broker.id = 1 09:00:51 kafka | broker.id.generation.enable = true 09:00:51 kafka | broker.rack = null 09:00:51 kafka | broker.session.timeout.ms = 9000 09:00:51 kafka | client.quota.callback.class = null 09:00:51 kafka | compression.type = producer 09:00:51 kafka | connection.failed.authentication.delay.ms = 100 09:00:51 kafka | connections.max.idle.ms = 600000 09:00:51 kafka | connections.max.reauth.ms = 0 09:00:51 kafka | control.plane.listener.name = null 09:00:51 kafka | controlled.shutdown.enable = true 09:00:51 kafka | controlled.shutdown.max.retries = 3 09:00:51 kafka | controlled.shutdown.retry.backoff.ms = 5000 09:00:51 kafka | controller.listener.names = null 09:00:51 kafka | controller.quorum.append.linger.ms = 25 09:00:51 kafka | controller.quorum.election.backoff.max.ms = 1000 09:00:51 kafka | controller.quorum.election.timeout.ms = 1000 09:00:51 kafka | controller.quorum.fetch.timeout.ms = 2000 09:00:51 kafka | controller.quorum.request.timeout.ms = 2000 09:00:51 kafka | controller.quorum.retry.backoff.ms = 20 09:00:51 kafka | controller.quorum.voters = [] 09:00:51 kafka | controller.quota.window.num = 11 09:00:51 kafka | controller.quota.window.size.seconds = 1 09:00:51 kafka | controller.socket.timeout.ms = 30000 09:00:51 kafka | create.topic.policy.class.name = null 09:00:51 kafka | default.replication.factor = 1 09:00:51 kafka | delegation.token.expiry.check.interval.ms = 3600000 09:00:51 kafka | delegation.token.expiry.time.ms = 86400000 09:00:51 kafka | delegation.token.master.key = null 09:00:51 kafka | delegation.token.max.lifetime.ms = 604800000 09:00:51 kafka | delegation.token.secret.key = null 09:00:51 kafka | delete.records.purgatory.purge.interval.requests = 1 09:00:51 kafka | delete.topic.enable = true 09:00:51 kafka | early.start.listeners = null 09:00:51 kafka | fetch.max.bytes = 57671680 09:00:51 kafka | fetch.purgatory.purge.interval.requests = 1000 
09:00:51 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 09:00:51 kafka | group.consumer.heartbeat.interval.ms = 5000 09:00:51 kafka | group.consumer.max.heartbeat.interval.ms = 15000 09:00:51 kafka | group.consumer.max.session.timeout.ms = 60000 09:00:51 kafka | group.consumer.max.size = 2147483647 09:00:51 kafka | group.consumer.min.heartbeat.interval.ms = 5000 09:00:51 kafka | group.consumer.min.session.timeout.ms = 45000 09:00:51 kafka | group.consumer.session.timeout.ms = 45000 09:00:51 kafka | group.coordinator.new.enable = false 09:00:51 kafka | group.coordinator.threads = 1 09:00:51 kafka | group.initial.rebalance.delay.ms = 3000 09:00:51 kafka | group.max.session.timeout.ms = 1800000 09:00:51 kafka | group.max.size = 2147483647 09:00:51 kafka | group.min.session.timeout.ms = 6000 09:00:51 kafka | initial.broker.registration.timeout.ms = 60000 09:00:51 kafka | inter.broker.listener.name = PLAINTEXT 09:00:51 kafka | inter.broker.protocol.version = 3.6-IV2 09:00:51 kafka | kafka.metrics.polling.interval.secs = 10 09:00:51 kafka | kafka.metrics.reporters = [] 09:00:51 kafka | leader.imbalance.check.interval.seconds = 300 09:00:51 kafka | leader.imbalance.per.broker.percentage = 10 09:00:51 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 09:00:51 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 09:00:51 kafka | log.cleaner.backoff.ms = 15000 09:00:51 kafka | log.cleaner.dedupe.buffer.size = 134217728 09:00:51 kafka | log.cleaner.delete.retention.ms = 86400000 09:00:51 kafka | log.cleaner.enable = true 09:00:51 kafka | log.cleaner.io.buffer.load.factor = 0.9 09:00:51 kafka | log.cleaner.io.buffer.size = 524288 09:00:51 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 09:00:51 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 09:00:51 kafka | log.cleaner.min.cleanable.ratio = 0.5 09:00:51 kafka | log.cleaner.min.compaction.lag.ms = 0 09:00:51 kafka | log.cleaner.threads = 1 09:00:51 kafka | log.cleanup.policy = [delete] 09:00:51 kafka | log.dir = /tmp/kafka-logs 09:00:51 kafka | log.dirs = /var/lib/kafka/data 09:00:51 kafka | log.flush.interval.messages = 9223372036854775807 09:00:51 kafka | log.flush.interval.ms = null 09:00:51 kafka | log.flush.offset.checkpoint.interval.ms = 60000 09:00:51 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 09:00:51 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 09:00:51 kafka | log.index.interval.bytes = 4096 09:00:51 kafka | log.index.size.max.bytes = 10485760 09:00:51 kafka | log.local.retention.bytes = -2 09:00:51 kafka | log.local.retention.ms = -2 09:00:51 kafka | log.message.downconversion.enable = true 09:00:51 kafka | log.message.format.version = 3.0-IV1 09:00:51 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 09:00:51 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 09:00:51 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 09:00:51 kafka | log.message.timestamp.type = CreateTime 09:00:51 kafka | log.preallocate = false 09:00:51 kafka | log.retention.bytes = -1 09:00:51 kafka | log.retention.check.interval.ms = 300000 09:00:51 kafka | log.retention.hours = 168 09:00:51 kafka | log.retention.minutes = null 09:00:51 kafka | log.retention.ms = null 09:00:51 kafka | log.roll.hours = 168 09:00:51 kafka | log.roll.jitter.hours = 0 09:00:51 kafka | log.roll.jitter.ms = null 09:00:51 kafka | log.roll.ms = null 09:00:51 kafka | 
log.segment.bytes = 1073741824 09:00:51 kafka | log.segment.delete.delay.ms = 60000 09:00:51 kafka | max.connection.creation.rate = 2147483647 09:00:51 kafka | max.connections = 2147483647 09:00:51 kafka | max.connections.per.ip = 2147483647 09:00:51 kafka | max.connections.per.ip.overrides = 09:00:51 kafka | max.incremental.fetch.session.cache.slots = 1000 09:00:51 kafka | message.max.bytes = 1048588 09:00:51 kafka | metadata.log.dir = null 09:00:51 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 09:00:51 kafka | metadata.log.max.snapshot.interval.ms = 3600000 09:00:51 kafka | metadata.log.segment.bytes = 1073741824 09:00:51 kafka | metadata.log.segment.min.bytes = 8388608 09:00:51 kafka | metadata.log.segment.ms = 604800000 09:00:51 kafka | metadata.max.idle.interval.ms = 500 09:00:51 kafka | metadata.max.retention.bytes = 104857600 09:00:51 kafka | metadata.max.retention.ms = 604800000 09:00:51 kafka | metric.reporters = [] 09:00:51 kafka | metrics.num.samples = 2 09:00:51 kafka | metrics.recording.level = INFO 09:00:51 kafka | metrics.sample.window.ms = 30000 09:00:51 kafka | min.insync.replicas = 1 09:00:51 kafka | node.id = 1 09:00:51 kafka | num.io.threads = 8 09:00:51 kafka | num.network.threads = 3 09:00:51 kafka | num.partitions = 1 09:00:51 kafka | num.recovery.threads.per.data.dir = 1 09:00:51 kafka | num.replica.alter.log.dirs.threads = null 09:00:51 kafka | num.replica.fetchers = 1 09:00:51 kafka | offset.metadata.max.bytes = 4096 09:00:51 kafka | offsets.commit.required.acks = -1 09:00:51 kafka | offsets.commit.timeout.ms = 5000 09:00:51 kafka | offsets.load.buffer.size = 5242880 09:00:51 kafka | offsets.retention.check.interval.ms = 600000 09:00:51 kafka | offsets.retention.minutes = 10080 09:00:51 kafka | offsets.topic.compression.codec = 0 09:00:51 kafka | offsets.topic.num.partitions = 50 09:00:51 kafka | offsets.topic.replication.factor = 1 09:00:51 kafka | offsets.topic.segment.bytes = 104857600 09:00:51 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 09:00:51 kafka | password.encoder.iterations = 4096 09:00:51 kafka | password.encoder.key.length = 128 09:00:51 kafka | password.encoder.keyfactory.algorithm = null 09:00:51 kafka | password.encoder.old.secret = null 09:00:51 kafka | password.encoder.secret = null 09:00:51 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 09:00:51 kafka | process.roles = [] 09:00:51 kafka | producer.id.expiration.check.interval.ms = 600000 09:00:51 kafka | producer.id.expiration.ms = 86400000 09:00:51 kafka | producer.purgatory.purge.interval.requests = 1000 09:00:51 kafka | queued.max.request.bytes = -1 09:00:51 kafka | queued.max.requests = 500 09:00:51 kafka | quota.window.num = 11 09:00:51 kafka | quota.window.size.seconds = 1 09:00:51 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 09:00:51 kafka | remote.log.manager.task.interval.ms = 30000 09:00:51 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 09:00:51 kafka | remote.log.manager.task.retry.backoff.ms = 500 09:00:51 kafka | remote.log.manager.task.retry.jitter = 0.2 09:00:51 kafka | remote.log.manager.thread.pool.size = 10 09:00:51 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 09:00:51 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 09:00:51 kafka | remote.log.metadata.manager.class.path = null 09:00:51 kafka | remote.log.metadata.manager.impl.prefix 
= rlmm.config. 09:00:51 kafka | remote.log.metadata.manager.listener.name = null 09:00:51 kafka | remote.log.reader.max.pending.tasks = 100 09:00:51 kafka | remote.log.reader.threads = 10 09:00:51 kafka | remote.log.storage.manager.class.name = null 09:00:51 kafka | remote.log.storage.manager.class.path = null 09:00:51 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 09:00:51 kafka | remote.log.storage.system.enable = false 09:00:51 kafka | replica.fetch.backoff.ms = 1000 09:00:51 kafka | replica.fetch.max.bytes = 1048576 09:00:51 kafka | replica.fetch.min.bytes = 1 09:00:51 kafka | replica.fetch.response.max.bytes = 10485760 09:00:51 kafka | replica.fetch.wait.max.ms = 500 09:00:51 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 09:00:51 kafka | replica.lag.time.max.ms = 30000 09:00:51 kafka | replica.selector.class = null 09:00:51 kafka | replica.socket.receive.buffer.bytes = 65536 09:00:51 kafka | replica.socket.timeout.ms = 30000 09:00:51 kafka | replication.quota.window.num = 11 09:00:51 kafka | replication.quota.window.size.seconds = 1 09:00:51 kafka | request.timeout.ms = 30000 09:00:51 kafka | reserved.broker.max.id = 1000 09:00:51 kafka | sasl.client.callback.handler.class = null 09:00:51 kafka | sasl.enabled.mechanisms = [GSSAPI] 09:00:51 kafka | sasl.jaas.config = null 09:00:51 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 kafka | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 09:00:51 kafka | sasl.kerberos.service.name = null 09:00:51 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 kafka | sasl.login.callback.handler.class = null 09:00:51 kafka | sasl.login.class = null 09:00:51 kafka | sasl.login.connect.timeout.ms = null 09:00:51 kafka | sasl.login.read.timeout.ms = null 09:00:51 kafka | sasl.login.refresh.buffer.seconds = 300 09:00:51 kafka | sasl.login.refresh.min.period.seconds = 60 09:00:51 kafka | sasl.login.refresh.window.factor = 0.8 09:00:51 kafka | sasl.login.refresh.window.jitter = 0.05 09:00:51 kafka | sasl.login.retry.backoff.max.ms = 10000 09:00:51 kafka | sasl.login.retry.backoff.ms = 100 09:00:51 kafka | sasl.mechanism.controller.protocol = GSSAPI 09:00:51 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 09:00:51 kafka | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 kafka | sasl.oauthbearer.expected.audience = null 09:00:51 kafka | sasl.oauthbearer.expected.issuer = null 09:00:51 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 kafka | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 kafka | sasl.oauthbearer.scope.claim.name = scope 09:00:51 kafka | sasl.oauthbearer.sub.claim.name = sub 09:00:51 kafka | sasl.oauthbearer.token.endpoint.url = null 09:00:51 kafka | sasl.server.callback.handler.class = null 09:00:51 kafka | sasl.server.max.receive.size = 524288 09:00:51 kafka | security.inter.broker.protocol = PLAINTEXT 09:00:51 kafka | security.providers = null 09:00:51 kafka | server.max.startup.time.ms = 9223372036854775807 09:00:51 kafka | socket.connection.setup.timeout.max.ms = 30000 09:00:51 kafka | socket.connection.setup.timeout.ms = 10000 09:00:51 kafka | socket.listen.backlog.size = 50 09:00:51 kafka | socket.receive.buffer.bytes = 102400 09:00:51 kafka | socket.request.max.bytes = 104857600 09:00:51 
kafka | socket.send.buffer.bytes = 102400 09:00:51 kafka | ssl.cipher.suites = [] 09:00:51 kafka | ssl.client.auth = none 09:00:51 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 kafka | ssl.endpoint.identification.algorithm = https 09:00:51 kafka | ssl.engine.factory.class = null 09:00:51 kafka | ssl.key.password = null 09:00:51 kafka | ssl.keymanager.algorithm = SunX509 09:00:51 kafka | ssl.keystore.certificate.chain = null 09:00:51 policy-apex-pdp | Waiting for mariadb port 3306... 09:00:51 policy-apex-pdp | mariadb (172.17.0.4:3306) open 09:00:51 policy-apex-pdp | Waiting for kafka port 9092... 09:00:51 policy-apex-pdp | kafka (172.17.0.9:9092) open 09:00:51 policy-apex-pdp | Waiting for pap port 6969... 09:00:51 policy-apex-pdp | pap (172.17.0.10:6969) open 09:00:51 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 09:00:51 policy-apex-pdp | [2024-04-24T08:58:51.903+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.077+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:00:51 policy-apex-pdp | allow.auto.create.topics = true 09:00:51 policy-apex-pdp | auto.commit.interval.ms = 5000 09:00:51 policy-apex-pdp | auto.include.jmx.reporter = true 09:00:51 policy-apex-pdp | auto.offset.reset = latest 09:00:51 policy-apex-pdp | bootstrap.servers = [kafka:9092] 09:00:51 policy-apex-pdp | check.crcs = true 09:00:51 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 09:00:51 policy-apex-pdp | client.id = consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-1 09:00:51 policy-apex-pdp | client.rack = 09:00:51 policy-apex-pdp | connections.max.idle.ms = 540000 09:00:51 policy-apex-pdp | default.api.timeout.ms = 60000 09:00:51 policy-apex-pdp | enable.auto.commit = true 09:00:51 policy-apex-pdp | exclude.internal.topics = true 09:00:51 policy-apex-pdp | fetch.max.bytes = 52428800 09:00:51 policy-apex-pdp | fetch.max.wait.ms = 500 09:00:51 policy-apex-pdp | fetch.min.bytes = 1 09:00:51 policy-apex-pdp | group.id = 6c14929a-34c8-48a0-adf2-d542a07b4ce8 09:00:51 policy-apex-pdp | group.instance.id = null 09:00:51 policy-apex-pdp | heartbeat.interval.ms = 3000 09:00:51 policy-apex-pdp | interceptor.classes = [] 09:00:51 policy-apex-pdp | internal.leave.group.on.close = true 09:00:51 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 09:00:51 policy-apex-pdp | isolation.level = read_uncommitted 09:00:51 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 policy-apex-pdp | max.partition.fetch.bytes = 1048576 09:00:51 policy-apex-pdp | max.poll.interval.ms = 300000 09:00:51 policy-apex-pdp | max.poll.records = 500 09:00:51 
policy-apex-pdp | metadata.max.age.ms = 300000 09:00:51 policy-apex-pdp | metric.reporters = [] 09:00:51 policy-apex-pdp | metrics.num.samples = 2 09:00:51 policy-apex-pdp | metrics.recording.level = INFO 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.625991891Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-24T08:58:11Z 09:00:51 policy-apex-pdp | metrics.sample.window.ms = 30000 09:00:51 policy-db-migrator | Waiting for mariadb port 3306... 09:00:51 kafka | ssl.keystore.key = null 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.626843575Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 09:00:51 mariadb | 2024-04-24 08:58:11+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 09:00:51 policy-pap | Waiting for mariadb port 3306... 09:00:51 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:00:51 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:00:51 policy-api | Waiting for mariadb port 3306... 09:00:51 policy-api | mariadb (172.17.0.4:3306) open 09:00:51 prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 09:00:51 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.626871776Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 09:00:51 mariadb | 2024-04-24 08:58:11+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 09:00:51 policy-pap | mariadb (172.17.0.4:3306) open 09:00:51 policy-apex-pdp | receive.buffer.bytes = 65536 09:00:51 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:00:51 kafka | ssl.keystore.location = null 09:00:51 policy-api | Waiting for policy-db-migrator port 6824... 09:00:51 zookeeper | ===> User 09:00:51 prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" 09:00:51 simulator | overriding logback.xml 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.626881596Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 09:00:51 mariadb | 2024-04-24 08:58:11+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 09:00:51 policy-pap | Waiting for kafka port 9092... 
09:00:51 policy-apex-pdp | reconnect.backoff.max.ms = 1000 09:00:51 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:00:51 kafka | ssl.keystore.password = null 09:00:51 policy-api | policy-db-migrator (172.17.0.7:6824) open 09:00:51 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 09:00:51 prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" 09:00:51 simulator | 2024-04-24 08:58:14,232 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.626938707Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 09:00:51 mariadb | 2024-04-24 08:58:12+00:00 [Note] [Entrypoint]: Initializing database files 09:00:51 policy-pap | kafka (172.17.0.9:9092) open 09:00:51 policy-apex-pdp | reconnect.backoff.ms = 50 09:00:51 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:00:51 kafka | ssl.keystore.type = JKS 09:00:51 zookeeper | ===> Configuring ... 09:00:51 prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 09:00:51 simulator | 2024-04-24 08:58:14,326 INFO org.onap.policy.models.simulators starting 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.626949827Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 09:00:51 mariadb | 2024-04-24 8:58:12 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 09:00:51 policy-pap | Waiting for api port 6969... 09:00:51 policy-apex-pdp | request.timeout.ms = 30000 09:00:51 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:00:51 kafka | ssl.principal.mapping.rules = DEFAULT 09:00:51 zookeeper | ===> Running preflight checks ... 09:00:51 prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 09:00:51 simulator | 2024-04-24 08:58:14,326 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.626968477Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 09:00:51 mariadb | 2024-04-24 8:58:12 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 09:00:51 policy-pap | api (172.17.0.8:6969) open 09:00:51 policy-apex-pdp | retry.backoff.ms = 100 09:00:51 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:00:51 kafka | ssl.protocol = TLSv1.3 09:00:51 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 
09:00:51 prometheus | ts=2024-04-24T08:58:10.729Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 09:00:51 simulator | 2024-04-24 08:58:14,546 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627035868Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 09:00:51 mariadb | 2024-04-24 8:58:12 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 09:00:51 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 09:00:51 policy-apex-pdp | sasl.client.callback.handler.class = null 09:00:51 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:00:51 kafka | ssl.provider = null 09:00:51 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 09:00:51 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 09:00:51 prometheus | ts=2024-04-24T08:58:10.731Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 09:00:51 simulator | 2024-04-24 08:58:14,548 INFO org.onap.policy.models.simulators starting A&AI simulator 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627095029Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 09:00:51 mariadb | 09:00:51 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 09:00:51 policy-apex-pdp | sasl.jaas.config = null 09:00:51 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 09:00:51 kafka | ssl.secure.random.implementation = null 09:00:51 policy-api | 09:00:51 zookeeper | ===> Launching ... 09:00:51 prometheus | ts=2024-04-24T08:58:10.732Z caller=main.go:1129 level=info msg="Starting TSDB ..." 09:00:51 simulator | 2024-04-24 08:58:14,680 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627113939Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 09:00:51 mariadb | 09:00:51 policy-pap | 09:00:51 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 policy-db-migrator | 321 blocks 09:00:51 kafka | ssl.trustmanager.algorithm = PKIX 09:00:51 policy-api | . ____ _ __ _ _ 09:00:51 zookeeper | ===> Launching zookeeper ... 
09:00:51 prometheus | ts=2024-04-24T08:58:10.737Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 09:00:51 simulator | 2024-04-24 08:58:14,695 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627121409Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 09:00:51 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 09:00:51 policy-pap | . ____ _ __ _ _ 09:00:51 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 policy-db-migrator | Preparing upgrade release version: 0800 09:00:51 kafka | ssl.truststore.certificates = null 09:00:51 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 09:00:51 zookeeper | [2024-04-24 08:58:18,606] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 prometheus | ts=2024-04-24T08:58:10.737Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 09:00:51 simulator | 2024-04-24 08:58:14,697 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627181871Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 09:00:51 mariadb | To do so, start the server, then issue the following command: 09:00:51 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 09:00:51 policy-apex-pdp | sasl.kerberos.service.name = null 09:00:51 policy-db-migrator | Preparing upgrade release version: 0900 09:00:51 kafka | ssl.truststore.location = null 09:00:51 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 09:00:51 zookeeper | [2024-04-24 08:58:18,612] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 prometheus | ts=2024-04-24T08:58:10.738Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 09:00:51 simulator | 2024-04-24 08:58:14,704 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627237722Z level=info msg=Target target=[all] 09:00:51 mariadb | 09:00:51 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 09:00:51 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 policy-db-migrator | Preparing upgrade release version: 1000 09:00:51 kafka | ssl.truststore.password = null 09:00:51 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 09:00:51 zookeeper | [2024-04-24 08:58:18,612] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=25.66µs 09:00:51 simulator | 2024-04-24 08:58:14,756 INFO Session workerName=node0 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627277093Z level=info msg="Path Home" path=/usr/share/grafana 09:00:51 mariadb | '/usr/bin/mysql_secure_installation' 09:00:51 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 09:00:51 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 policy-db-migrator | Preparing upgrade release version: 1100 09:00:51 kafka | ssl.truststore.type = JKS 09:00:51 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 09:00:51 zookeeper | [2024-04-24 08:58:18,612] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 09:00:51 simulator | 2024-04-24 08:58:15,403 INFO Using GSON for REST 
calls 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627333954Z level=info msg="Path Data" path=/var/lib/grafana 09:00:51 mariadb | 09:00:51 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 09:00:51 policy-apex-pdp | sasl.login.callback.handler.class = null 09:00:51 policy-db-migrator | Preparing upgrade release version: 1200 09:00:51 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 09:00:51 policy-api | =========|_|==============|___/=/_/_/_/ 09:00:51 zookeeper | [2024-04-24 08:58:18,612] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 09:00:51 simulator | 2024-04-24 08:58:15,492 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627418985Z level=info msg="Path Logs" path=/var/log/grafana 09:00:51 mariadb | which will also give you the option of removing the test 09:00:51 policy-pap | =========|_|==============|___/=/_/_/_/ 09:00:51 policy-apex-pdp | sasl.login.class = null 09:00:51 policy-db-migrator | Preparing upgrade release version: 1300 09:00:51 kafka | transaction.max.timeout.ms = 900000 09:00:51 policy-api | :: Spring Boot :: (v3.1.10) 09:00:51 zookeeper | [2024-04-24 08:58:18,614] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 09:00:51 prometheus | ts=2024-04-24T08:58:10.739Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=46.321µs wal_replay_duration=482.4µs wbl_replay_duration=430ns total_replay_duration=686.503µs 09:00:51 simulator | 2024-04-24 08:58:15,501 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627467576Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 09:00:51 mariadb | databases and anonymous user created by default. This is 09:00:51 policy-pap | :: Spring Boot :: (v3.1.10) 09:00:51 policy-apex-pdp | sasl.login.connect.timeout.ms = null 09:00:51 policy-db-migrator | Done 09:00:51 kafka | transaction.partition.verification.enable = true 09:00:51 policy-api | 09:00:51 zookeeper | [2024-04-24 08:58:18,614] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 09:00:51 prometheus | ts=2024-04-24T08:58:10.742Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 09:00:51 simulator | 2024-04-24 08:58:15,511 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1759ms 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627494546Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 09:00:51 mariadb | strongly recommended for production servers. 09:00:51 policy-pap | 09:00:51 policy-apex-pdp | sasl.login.read.timeout.ms = null 09:00:51 policy-db-migrator | name version 09:00:51 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 09:00:51 policy-api | [2024-04-24T08:58:28.310+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 09:00:51 zookeeper | [2024-04-24 08:58:18,614] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) 09:00:51 prometheus | ts=2024-04-24T08:58:10.742Z caller=main.go:1153 level=info msg="TSDB started" 09:00:51 simulator | 2024-04-24 08:58:15,511 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4186 ms. 09:00:51 grafana | logger=settings t=2024-04-24T08:58:11.627504316Z level=info msg="App mode production" 09:00:51 mariadb | 09:00:51 policy-pap | [2024-04-24T08:58:41.675+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 09:00:51 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 09:00:51 policy-db-migrator | policyadmin 0 09:00:51 kafka | transaction.state.log.load.buffer.size = 5242880 09:00:51 policy-api | [2024-04-24T08:58:28.365+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin) 09:00:51 zookeeper | [2024-04-24 08:58:18,614] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 09:00:51 prometheus | ts=2024-04-24T08:58:10.742Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 09:00:51 simulator | 2024-04-24 08:58:15,515 INFO org.onap.policy.models.simulators starting SDNC simulator 09:00:51 grafana | logger=sqlstore t=2024-04-24T08:58:11.628757818Z level=info msg="Connecting to DB" dbtype=sqlite3 09:00:51 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 09:00:51 policy-pap | [2024-04-24T08:58:41.732+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 09:00:51 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 09:00:51 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 09:00:51 kafka | transaction.state.log.min.isr = 2 09:00:51 policy-api | [2024-04-24T08:58:28.366+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 09:00:51 zookeeper | [2024-04-24 08:58:18,616] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 09:00:51 prometheus | ts=2024-04-24T08:58:10.743Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.04463ms db_storage=1.54µs remote_storage=2.79µs web_handler=550ns query_engine=1.12µs scrape=290.075µs scrape_sd=127.273µs notify=27.141µs notify_sd=12.14µs rules=2.53µs tracing=4.9µs 09:00:51 simulator | 2024-04-24 08:58:15,517 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:00:51 grafana | logger=sqlstore t=2024-04-24T08:58:11.628842659Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 09:00:51 mariadb | 09:00:51 policy-pap | [2024-04-24T08:58:41.733+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 09:00:51 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 09:00:51 policy-db-migrator | upgrade: 0 -> 1300 09:00:51 kafka | transaction.state.log.num.partitions = 50 09:00:51 policy-api | [2024-04-24T08:58:30.264+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 09:00:51 zookeeper | [2024-04-24 08:58:18,616] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 prometheus | ts=2024-04-24T08:58:10.743Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 09:00:51 simulator | 2024-04-24 08:58:15,517 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.631354072Z level=info msg="Starting DB migrations" 09:00:51 mariadb | Please report any problems at https://mariadb.org/jira 09:00:51 policy-pap | [2024-04-24T08:58:43.621+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
09:00:51 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 09:00:51 policy-db-migrator | 09:00:51 kafka | transaction.state.log.replication.factor = 3 09:00:51 policy-api | [2024-04-24T08:58:30.338+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 66 ms. Found 6 JPA repository interfaces. 09:00:51 zookeeper | [2024-04-24 08:58:18,616] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 prometheus | ts=2024-04-24T08:58:10.743Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 09:00:51 simulator | 2024-04-24 08:58:15,522 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.633260824Z level=info msg="Executing migration" id="create migration_log table" 09:00:51 policy-pap | [2024-04-24T08:58:43.708+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 78 ms. Found 7 JPA repository interfaces. 09:00:51 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 09:00:51 kafka | transaction.state.log.segment.bytes = 104857600 09:00:51 policy-api | [2024-04-24T08:58:30.735+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 09:00:51 zookeeper | [2024-04-24 08:58:18,616] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 simulator | 2024-04-24 08:58:15,524 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.634443364Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.18311ms 09:00:51 mariadb | 09:00:51 policy-pap | [2024-04-24T08:58:44.134+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 09:00:51 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | transactional.id.expiration.ms = 604800000 09:00:51 policy-api | [2024-04-24T08:58:30.735+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 09:00:51 zookeeper | [2024-04-24 08:58:18,616] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 simulator | 2024-04-24 08:58:15,547 INFO Session workerName=node0 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.638823848Z level=info msg="Executing migration" id="create user table" 09:00:51 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 09:00:51 policy-pap | [2024-04-24T08:58:44.134+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 09:00:51 policy-apex-pdp | sasl.mechanism = GSSAPI 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 09:00:51 kafka | unclean.leader.election.enable = false 09:00:51 policy-api | [2024-04-24T08:58:31.395+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 09:00:51 zookeeper | [2024-04-24 08:58:18,616] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 09:00:51 simulator | 2024-04-24 08:58:15,604 INFO Using GSON for REST calls 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.639454469Z level=info msg="Migration successfully executed" id="create user table" duration=632.441µs 09:00:51 mariadb | 09:00:51 policy-pap | [2024-04-24T08:58:44.774+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 09:00:51 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | unstable.api.versions.enable = false 09:00:51 policy-api | [2024-04-24T08:58:31.406+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:00:51 zookeeper | [2024-04-24 08:58:18,616] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 09:00:51 simulator | 2024-04-24 08:58:15,615 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.645353769Z level=info msg="Executing migration" id="add unique index user.login" 09:00:51 mariadb | Consider joining MariaDB's strong and vibrant community: 09:00:51 policy-pap | [2024-04-24T08:58:44.784+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:00:51 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.clientCnxnSocket = null 09:00:51 policy-api | [2024-04-24T08:58:31.408+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:00:51 zookeeper | [2024-04-24 08:58:18,631] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics) 09:00:51 simulator | 2024-04-24 08:58:15,618 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.646159592Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=808.844µs 09:00:51 mariadb | 
https://mariadb.org/get-involved/ 09:00:51 policy-pap | [2024-04-24T08:58:44.786+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:00:51 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.connect = zookeeper:2181 09:00:51 policy-api | [2024-04-24T08:58:31.408+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 09:00:51 zookeeper | [2024-04-24 08:58:18,635] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 09:00:51 simulator | 2024-04-24 08:58:15,618 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1866ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.65016414Z level=info msg="Executing migration" id="add unique index user.email" 09:00:51 mariadb | 09:00:51 policy-pap | [2024-04-24T08:58:44.786+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 09:00:51 kafka | zookeeper.connection.timeout.ms = null 09:00:51 policy-api | [2024-04-24T08:58:31.495+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 09:00:51 zookeeper | [2024-04-24 08:58:18,635] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.651019855Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=854.036µs 09:00:51 simulator | 2024-04-24 08:58:15,618 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4900 ms. 
09:00:51 mariadb | 2024-04-24 08:58:14+00:00 [Note] [Entrypoint]: Database files initialized 09:00:51 policy-pap | [2024-04-24T08:58:44.880+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | zookeeper.max.in.flight.requests = 10 09:00:51 policy-api | [2024-04-24T08:58:31.495+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3062 ms 09:00:51 zookeeper | [2024-04-24 08:58:18,637] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.654987171Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 09:00:51 simulator | 2024-04-24 08:58:15,619 INFO org.onap.policy.models.simulators starting SO simulator 09:00:51 mariadb | 2024-04-24 08:58:14+00:00 [Note] [Entrypoint]: Starting temporary server 09:00:51 policy-pap | [2024-04-24T08:58:44.880+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3080 ms 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 09:00:51 kafka | zookeeper.metadata.migration.enable = false 09:00:51 policy-api | [2024-04-24T08:58:31.890+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.656031659Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.043698ms 09:00:51 simulator | 2024-04-24 08:58:15,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:00:51 mariadb | 2024-04-24 08:58:14+00:00 [Note] [Entrypoint]: Waiting for server startup 09:00:51 policy-pap | [2024-04-24T08:58:45.319+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | zookeeper.metadata.migration.min.batch.size = 200 09:00:51 policy-api | 
[2024-04-24T08:58:31.967+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.65959456Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 09:00:51 simulator | 2024-04-24 08:58:15,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 09:00:51 policy-pap | [2024-04-24T08:58:45.376+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 09:00:51 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.session.timeout.ms = 18000 09:00:51 policy-api | [2024-04-24T08:58:32.011+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.660281241Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=686.171µs 09:00:51 simulator | 2024-04-24 08:58:15,625 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 09:00:51 policy-pap | [2024-04-24T08:58:45.796+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
09:00:51 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.set.acl = false 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,626 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Number of transaction pools: 1 09:00:51 policy-pap | [2024-04-24T08:58:45.901+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9 09:00:51 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 09:00:51 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 09:00:51 kafka | zookeeper.ssl.cipher.suites = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.66553615Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,660 INFO Session workerName=node0 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 09:00:51 policy-pap | [2024-04-24T08:58:45.903+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 09:00:51 policy-apex-pdp | security.protocol = PLAINTEXT 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.667923311Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.388011ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.671044343Z level=info msg="Executing migration" id="create user table v2" 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,733 INFO Using GSON for REST calls 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 09:00:51 policy-pap | [2024-04-24T08:58:45.934+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 09:00:51 policy-apex-pdp | security.providers = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.671916508Z level=info msg="Migration successfully executed" id="create user table v2" duration=871.655µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.675094542Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,748 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 09:00:51 policy-pap | [2024-04-24T08:58:47.521+00:00|INFO|JtaPlatformInitiator|main] HHH000490: 
Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 09:00:51 policy-apex-pdp | send.buffer.bytes = 131072 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | zookeeper.ssl.client.enable = false 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.676253222Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.15614ms 09:00:51 policy-api | [2024-04-24T08:58:32.290+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,749 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 09:00:51 policy-pap | [2024-04-24T08:58:47.531+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:00:51 policy-apex-pdp | session.timeout.ms = 45000 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.ssl.crl.enable = false 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.681934798Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 09:00:51 policy-api | [2024-04-24T08:58:32.320+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,749 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1997ms 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 09:00:51 policy-pap | [2024-04-24T08:58:48.070+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 09:00:51 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.ssl.enabled.protocols = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.683879231Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.944803ms 09:00:51 policy-api | [2024-04-24T08:58:32.409+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@1f0b3cfe 09:00:51 zookeeper | [2024-04-24 08:58:18,649] INFO (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,749 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4876 ms. 09:00:51 policy-pap | [2024-04-24T08:58:48.508+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 09:00:51 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 09:00:51 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 09:00:51 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.687288888Z level=info msg="Executing migration" id="copy data_source v1 to v2" 09:00:51 policy-api | [2024-04-24T08:58:32.410+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,750 INFO org.onap.policy.models.simulators starting VFC simulator 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Completed initialization of buffer pool 09:00:51 policy-pap | [2024-04-24T08:58:48.629+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 09:00:51 policy-apex-pdp | ssl.cipher.suites = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | zookeeper.ssl.keystore.location = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.687622324Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=333.136µs 09:00:51 policy-api | [2024-04-24T08:58:34.497+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:host.name=3a03b3ec39eb (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,753 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 09:00:51 policy-pap | [2024-04-24T08:58:48.894+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:00:51 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 09:00:51 kafka | zookeeper.ssl.keystore.password = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.689885172Z level=info msg="Executing migration" id="Drop old table user_v1" 09:00:51 policy-api | [2024-04-24T08:58:34.500+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,753 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, 
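The repeated LocalVariableTableParameterNameDiscoverer warnings above (for ServiceExceptionHandler, PdpGroupRepository, PolicyStatusRepository, PolicyAuditRepository) are Spring falling back to the deprecated '-debug' mechanism because the affected classes were compiled without javac's -parameters flag; with Maven this is normally enabled through the maven-compiler-plugin parameters option or by passing -parameters to javac directly. A minimal, self-contained Java sketch of what the flag changes; the class and method names below are illustrative only and are not taken from the policy code:

import java.lang.reflect.Method;

public class ParameterNameCheck {

    // Illustrative method; its real parameter name is only kept in the
    // class file when the class is compiled with "javac -parameters".
    static void savePolicy(String policyName) {
    }

    public static void main(String[] args) throws Exception {
        Method m = ParameterNameCheck.class.getDeclaredMethod("savePolicy", String.class);
        // Prints "policyName" when compiled with -parameters, otherwise the
        // synthetic "arg0" -- the case that pushes Spring onto the deprecated
        // '-debug' (local variable table) fallback reported in the log above.
        System.out.println(m.getParameters()[0].getName());
    }
}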
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: 128 rollback segments are active. 09:00:51 policy-pap | allow.auto.create.topics = true 09:00:51 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | zookeeper.ssl.keystore.type = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.690436732Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=551.41µs 09:00:51 policy-api | [2024-04-24T08:58:35.559+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,754 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 09:00:51 policy-pap | auto.commit.interval.ms = 5000 09:00:51 policy-apex-pdp | ssl.engine.factory.class = null 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.ssl.ocsp.enable = false 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.695478357Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 09:00:51 policy-api | [2024-04-24T08:58:36.348+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 simulator | 2024-04-24 08:58:15,755 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 09:00:51 policy-pap | auto.include.jmx.reporter = true 09:00:51 policy-apex-pdp | ssl.key.password = null 09:00:51 policy-db-migrator | 09:00:51 kafka | zookeeper.ssl.protocol = TLSv1.2 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.696602416Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.122499ms 09:00:51 policy-api | [2024-04-24T08:58:37.404+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 09:00:51 simulator | 2024-04-24 08:58:15,767 INFO Session workerName=node0 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kaf
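The spring.jpa.open-in-view warning logged by policy-api above is Spring Boot's standard notice that the Open-EntityManager-in-View interceptor is enabled by default, so database queries may run during view rendering; it is silenced by setting the property explicitly, e.g. spring.jpa.open-in-view=false (or true, if view-time lazy loading is actually wanted) in the service's application properties. This is generic Spring Boot behaviour, not something specific to this build.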
ka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] InnoDB: log sequence number 
46590; transaction id 14 09:00:51 policy-pap | auto.offset.reset = latest 09:00:51 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 09:00:51 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 09:00:51 kafka | zookeeper.ssl.truststore.location = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.699694668Z level=info msg="Executing migration" id="Update user table charset" 09:00:51 policy-api | [2024-04-24T08:58:37.652+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@58a0b88e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1b404a21, org.springframework.security.web.context.SecurityContextHolderFilter@3c6c7782, org.springframework.security.web.header.HeaderWriterFilter@1cdb4bd3, org.springframework.security.web.authentication.logout.LogoutFilter@452d71e5, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@66a2bc61, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@739e76e6, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@27153ba2, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@280aa1bd, org.springframework.security.web.access.ExceptionTranslationFilter@782b12c9, org.springframework.security.web.access.intercept.AuthorizationFilter@23639e5] 09:00:51 simulator | 2024-04-24 08:58:15,831 INFO Using GSON for REST calls 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] Plugin 'FEEDBACK' is disabled. 09:00:51 policy-pap | bootstrap.servers = [kafka:9092] 09:00:51 policy-apex-pdp | ssl.keystore.certificate.chain = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | zookeeper.ssl.truststore.password = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.699724669Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.75µs 09:00:51 policy-api | [2024-04-24T08:58:38.506+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 09:00:51 simulator | 2024-04-24 08:58:15,840 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
09:00:51 policy-pap | check.crcs = true 09:00:51 policy-apex-pdp | ssl.keystore.key = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 09:00:51 kafka | zookeeper.ssl.truststore.type = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.702219291Z level=info msg="Executing migration" id="Add last_seen_at column to user" 09:00:51 simulator | 2024-04-24 08:58:15,841 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 09:00:51 policy-pap | client.dns.lookup = use_all_dns_ips 09:00:51 policy-apex-pdp | ssl.keystore.location = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | (kafka.server.KafkaConfig) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.70334063Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.123769ms 09:00:51 simulator | 2024-04-24 08:58:15,841 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2089ms 09:00:51 simulator | 2024-04-24 08:58:15,841 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4913 ms. 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 09:00:51 policy-pap | client.id = consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-1 09:00:51 policy-apex-pdp | ssl.keystore.password = null 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:22,609] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.706529674Z level=info msg="Executing migration" id="Add missing user data" 09:00:51 zookeeper | [2024-04-24 08:58:18,651] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 8:58:14 0 [Note] mariadbd: ready for connections. 
09:00:51 policy-pap | client.rack = 09:00:51 policy-apex-pdp | ssl.keystore.type = JKS 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:22,609] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.706860619Z level=info msg="Migration successfully executed" id="Add missing user data" duration=330.765µs 09:00:51 policy-api | [2024-04-24T08:58:38.603+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:00:51 simulator | 2024-04-24 08:58:15,842 INFO org.onap.policy.models.simulators started 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 09:00:51 policy-pap | connections.max.idle.ms = 540000 09:00:51 policy-apex-pdp | ssl.protocol = TLSv1.3 09:00:51 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 09:00:51 kafka | [2024-04-24 08:58:22,614] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.735703738Z level=info msg="Executing migration" id="Add is_disabled column to user" 09:00:51 policy-api | [2024-04-24T08:58:38.624+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 08:58:15+00:00 [Note] [Entrypoint]: Temporary server started. 
09:00:51 policy-pap | default.api.timeout.ms = 60000 09:00:51 policy-apex-pdp | ssl.provider = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:22,617] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 09:00:51 policy-api | [2024-04-24T08:58:38.642+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.993 seconds (process running for 11.577) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.737297535Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.593237ms 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 08:58:17+00:00 [Note] [Entrypoint]: Creating user policy_user 09:00:51 policy-pap | enable.auto.commit = true 09:00:51 policy-apex-pdp | ssl.secure.random.implementation = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 09:00:51 kafka | [2024-04-24 08:58:22,648] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 09:00:51 policy-api | [2024-04-24T08:58:39.932+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.740488758Z level=info msg="Executing migration" id="Add index user.login/user.email" 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 08:58:17+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 09:00:51 policy-pap | exclude.internal.topics = true 09:00:51 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:22,655] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 09:00:51 policy-api | [2024-04-24T08:58:39.933+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.741381624Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=893.046µs 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 09:00:51 policy-pap | fetch.max.bytes = 52428800 09:00:51 policy-apex-pdp | ssl.truststore.certificates = null 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:22,665] INFO Loaded 0 logs in 17ms (kafka.log.LogManager) 09:00:51 policy-api | [2024-04-24T08:58:39.934+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.744664039Z level=info msg="Executing migration" id="Add is_service_account column to user" 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 09:00:51 policy-pap | fetch.max.wait.ms = 500 09:00:51 policy-apex-pdp | ssl.truststore.location = null 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:22,666] INFO Starting log cleanup with a 
period of 300000 ms. (kafka.log.LogManager) 09:00:51 policy-api | [2024-04-24T08:58:58.038+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.74591859Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.253991ms 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 08:58:17+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 09:00:51 policy-pap | fetch.min.bytes = 1 09:00:51 policy-apex-pdp | ssl.truststore.password = null 09:00:51 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 09:00:51 kafka | [2024-04-24 08:58:22,667] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 09:00:51 policy-api | [] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.748992533Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | 2024-04-24 08:58:17+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 09:00:51 policy-pap | group.id = c2598a93-7b5f-4e4e-b23a-b864ffd9a18a 09:00:51 policy-apex-pdp | ssl.truststore.type = JKS 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:22,677] INFO Starting the log cleaner (kafka.log.LogCleaner) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.758023385Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.030462ms 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-pap | group.instance.id = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:22,721] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.762813136Z level=info msg="Executing migration" id="Add uid column to user" 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 mariadb | #!/bin/bash -xv 09:00:51 policy-pap | heartbeat.interval.ms = 3000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:22,735] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.764260371Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.446445ms 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | 09:00:51 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. 
All rights reserved 09:00:51 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:22,747] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.767525426Z level=info msg="Executing migration" id="Update uid column values for users" 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.284+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 policy-pap | interceptor.classes = [] 09:00:51 mariadb | # 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:22,783] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.76777478Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=249.144µs 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.285+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 policy-pap | internal.leave.group.on.close = true 09:00:51 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 09:00:51 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 09:00:51 kafka | [2024-04-24 08:58:23,089] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.770148151Z level=info msg="Executing migration" id="Add unique index user_uid" 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.285+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949132283 09:00:51 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:00:51 mariadb | # you may not use this file except in compliance with the License. 
09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:23,109] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.770905944Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=757.513µs 09:00:51 zookeeper | [2024-04-24 08:58:18,652] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.287+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-1, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Subscribed to topic(s): policy-pdp-pap 09:00:51 policy-pap | isolation.level = read_uncommitted 09:00:51 mariadb | # You may obtain a copy of the License at 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 09:00:51 kafka | [2024-04-24 08:58:23,109] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.774170689Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 09:00:51 zookeeper | [2024-04-24 08:58:18,653] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.300+00:00|INFO|ServiceManager|main] service manager starting 09:00:51 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 mariadb | # 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:23,114] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.774515735Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=347.557µs 09:00:51 zookeeper | [2024-04-24 08:58:18,654] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.301+00:00|INFO|ServiceManager|main] service manager starting topics 09:00:51 policy-pap | max.partition.fetch.bytes = 1048576 09:00:51 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.779282765Z level=info msg="Executing migration" id="create temp user table v1-7" 09:00:51 kafka | [2024-04-24 08:58:23,118] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 09:00:51 zookeeper | [2024-04-24 08:58:18,654] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.303+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6c14929a-34c8-48a0-adf2-d542a07b4ce8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, 
locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 09:00:51 policy-pap | max.poll.interval.ms = 300000 09:00:51 mariadb | # 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.780108629Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=825.084µs 09:00:51 kafka | [2024-04-24 08:58:23,141] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 zookeeper | [2024-04-24 08:58:18,655] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.322+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:00:51 policy-pap | max.poll.records = 500 09:00:51 mariadb | # Unless required by applicable law or agreed to in writing, software 09:00:51 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.783267512Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 09:00:51 kafka | [2024-04-24 08:58:23,142] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 zookeeper | [2024-04-24 08:58:18,655] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 09:00:51 policy-apex-pdp | allow.auto.create.topics = true 09:00:51 policy-pap | metadata.max.age.ms = 300000 09:00:51 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.784013436Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=744.174µs 09:00:51 kafka | [2024-04-24 08:58:23,145] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:00:51 policy-apex-pdp | auto.commit.interval.ms = 5000 09:00:51 policy-pap | metric.reporters = [] 09:00:51 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
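Both policy-pap and policy-apex-pdp print a full Kafka ConsumerConfig dump at startup (bootstrap.servers = [kafka:9092], group.id, auto.offset.reset = latest, StringDeserializer key/value deserializers, and so on) before subscribing to the policy-pdp-pap topic. A minimal sketch of an equivalent consumer using the plain Kafka client API; the broker address and topic name are taken from the log, while the group id and the single poll loop are placeholders for illustration and are not the actual ONAP topic-source implementation:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapTopicReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirroring the ConsumerConfig dump in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // placeholder group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Same topic the apex-pdp consumer subscribes to in the log.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}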
09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.787202629Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 09:00:51 kafka | [2024-04-24 08:58:23,147] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:00:51 policy-apex-pdp | auto.include.jmx.reporter = true 09:00:51 policy-pap | metrics.num.samples = 2 09:00:51 mariadb | # See the License for the specific language governing permissions and 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.788044013Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=841.574µs 09:00:51 kafka | [2024-04-24 08:58:23,148] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:00:51 policy-apex-pdp | auto.offset.reset = latest 09:00:51 policy-pap | metrics.recording.level = INFO 09:00:51 mariadb | # limitations under the License. 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.793184601Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 09:00:51 kafka | [2024-04-24 08:58:23,161] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 09:00:51 zookeeper | [2024-04-24 08:58:18,656] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:00:51 policy-apex-pdp | bootstrap.servers = [kafka:9092] 09:00:51 policy-pap | metrics.sample.window.ms = 30000 09:00:51 mariadb | 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.793952714Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=767.783µs 09:00:51 kafka | [2024-04-24 08:58:23,162] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 09:00:51 policy-apex-pdp | check.crcs = true 09:00:51 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:00:51 zookeeper | [2024-04-24 08:58:18,657] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:00:51 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:00:51 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.796912174Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 09:00:51 kafka | [2024-04-24 08:58:23,181] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 09:00:51 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 09:00:51 policy-pap | receive.buffer.bytes = 65536 09:00:51 zookeeper | [2024-04-24 08:58:18,657] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:00:51 mariadb | do 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.797683637Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=771.343µs 09:00:51 kafka | [2024-04-24 08:58:23,206] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1713949103193,1713949103193,1,0,0,72057610827661313,258,0,27 09:00:51 policy-apex-pdp | client.id = consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2 09:00:51 policy-pap | reconnect.backoff.max.ms = 1000 09:00:51 zookeeper | [2024-04-24 08:58:18,659] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.801732625Z level=info msg="Executing migration" id="Update temp_user table charset" 09:00:51 kafka | (kafka.zk.KafkaZkClient) 09:00:51 policy-apex-pdp | client.rack = 09:00:51 policy-pap | reconnect.backoff.ms = 50 09:00:51 zookeeper | [2024-04-24 08:58:18,659] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.801759006Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=27.471µs 09:00:51 kafka | [2024-04-24 08:58:23,207] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 09:00:51 zookeeper | [2024-04-24 08:58:18,660] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 09:00:51 policy-apex-pdp | connections.max.idle.ms = 540000 09:00:51 mariadb | done 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.806645918Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 09:00:51 kafka | [2024-04-24 08:58:23,258] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 09:00:51 policy-pap | request.timeout.ms = 30000 09:00:51 zookeeper | [2024-04-24 08:58:18,660] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 09:00:51 policy-apex-pdp | default.api.timeout.ms = 60000 09:00:51 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.807782398Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.13719ms 09:00:51 kafka | [2024-04-24 08:58:23,264] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 
09:00:51 policy-pap | retry.backoff.ms = 100 09:00:51 zookeeper | [2024-04-24 08:58:18,660] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 policy-apex-pdp | enable.auto.commit = true 09:00:51 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 09:00:51 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.811128514Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 09:00:51 kafka | [2024-04-24 08:58:23,270] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 policy-pap | sasl.client.callback.handler.class = null 09:00:51 zookeeper | [2024-04-24 08:58:18,681] INFO Logging initialized @586ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 09:00:51 policy-apex-pdp | exclude.internal.topics = true 09:00:51 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.812322484Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.19445ms 09:00:51 kafka | [2024-04-24 08:58:23,271] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 policy-pap | sasl.jaas.config = null 09:00:51 zookeeper | [2024-04-24 08:58:18,785] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 09:00:51 policy-apex-pdp | fetch.max.bytes = 52428800 09:00:51 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.815874025Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 09:00:51 kafka | [2024-04-24 08:58:23,279] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 09:00:51 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 zookeeper | [2024-04-24 08:58:18,785] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 09:00:51 policy-apex-pdp | fetch.max.wait.ms = 500 09:00:51 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.81737906Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.504886ms 09:00:51 kafka | [2024-04-24 08:58:23,284] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 zookeeper | [2024-04-24 08:58:18,813] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 09:00:51 policy-apex-pdp | fetch.min.bytes = 1 09:00:51 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.823326451Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 09:00:51 kafka | [2024-04-24 08:58:23,288] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 09:00:51 policy-pap | sasl.kerberos.service.name = null 09:00:51 zookeeper | [2024-04-24 08:58:18,849] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 09:00:51 policy-apex-pdp | group.id = 6c14929a-34c8-48a0-adf2-d542a07b4ce8 09:00:51 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.824597643Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.275032ms 09:00:51 kafka | [2024-04-24 08:58:23,289] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 zookeeper | [2024-04-24 08:58:18,849] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 09:00:51 policy-apex-pdp | group.instance.id = null 09:00:51 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 09:00:51 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.827854697Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 09:00:51 kafka | [2024-04-24 08:58:23,292] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 09:00:51 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 zookeeper | [2024-04-24 08:58:18,850] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 09:00:51 policy-apex-pdp | heartbeat.interval.ms = 3000 09:00:51 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.833021235Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.166158ms 09:00:51 kafka | [2024-04-24 08:58:23,295] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 09:00:51 policy-pap | sasl.login.callback.handler.class = null 09:00:51 zookeeper | [2024-04-24 08:58:18,856] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 09:00:51 policy-apex-pdp | interceptor.classes = [] 09:00:51 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:00:51 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.836369681Z level=info msg="Executing migration" id="create temp_user v2" 09:00:51 kafka | [2024-04-24 08:58:23,306] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 09:00:51 policy-pap | sasl.login.class = null 09:00:51 zookeeper | [2024-04-24 08:58:18,866] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 09:00:51 policy-apex-pdp | internal.leave.group.on.close = true 09:00:51 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.837503261Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.13275ms 09:00:51 kafka | [2024-04-24 08:58:23,309] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 09:00:51 policy-pap | sasl.login.connect.timeout.ms = null 09:00:51 zookeeper | [2024-04-24 08:58:18,877] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 09:00:51 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 09:00:51 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.842861462Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 09:00:51 kafka | [2024-04-24 08:58:23,309] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 09:00:51 policy-pap | sasl.login.read.timeout.ms = null 09:00:51 zookeeper | [2024-04-24 08:58:18,878] INFO Started @783ms (org.eclipse.jetty.server.Server) 09:00:51 policy-apex-pdp | isolation.level = read_uncommitted 09:00:51 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.843617684Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=757.442µs 09:00:51 kafka | [2024-04-24 08:58:23,320] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 09:00:51 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:00:51 zookeeper | [2024-04-24 08:58:18,878] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 09:00:51 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 09:00:51 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.846449822Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 09:00:51 kafka | [2024-04-24 08:58:23,320] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 09:00:51 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:00:51 zookeeper | [2024-04-24 08:58:18,881] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 09:00:51 policy-apex-pdp | max.partition.fetch.bytes = 1048576 09:00:51 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.847167464Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=716.662µs 09:00:51 kafka | [2024-04-24 08:58:23,326] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 09:00:51 policy-pap | sasl.login.refresh.window.factor = 0.8 09:00:51 zookeeper | [2024-04-24 08:58:18,882] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 09:00:51 policy-apex-pdp | max.poll.interval.ms = 300000 09:00:51 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.850019893Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 09:00:51 kafka | [2024-04-24 08:58:23,330] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,884] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 09:00:51 policy-apex-pdp | max.poll.records = 500 09:00:51 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.850723625Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=703.912µs 09:00:51 kafka | [2024-04-24 08:58:23,333] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 09:00:51 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:00:51 zookeeper | [2024-04-24 08:58:18,885] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 09:00:51 policy-apex-pdp | metadata.max.age.ms = 300000 09:00:51 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.856987291Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 09:00:51 kafka | [2024-04-24 08:58:23,340] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 09:00:51 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:00:51 zookeeper | [2024-04-24 08:58:18,899] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 09:00:51 policy-apex-pdp | metric.reporters = [] 09:00:51 mariadb | 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.857810684Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=822.333µs 09:00:51 kafka | [2024-04-24 08:58:23,352] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 09:00:51 policy-pap | sasl.login.retry.backoff.ms = 100 09:00:51 zookeeper | [2024-04-24 08:58:18,899] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 09:00:51 policy-apex-pdp | metrics.num.samples = 2 09:00:51 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 09:00:51 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.869711076Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 09:00:51 kafka | [2024-04-24 08:58:23,357] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 09:00:51 policy-pap | sasl.mechanism = GSSAPI 09:00:51 zookeeper | [2024-04-24 08:58:18,901] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 09:00:51 policy-apex-pdp | metrics.recording.level = INFO 09:00:51 policy-db-migrator | -------------- 09:00:51 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.870970167Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=1.258701ms 09:00:51 kafka | [2024-04-24 08:58:23,362] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 09:00:51 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 zookeeper | [2024-04-24 08:58:18,901] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 
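The mariadb entrypoint trace interleaved above loops over the six policy schemas (migration, pooling, policyadmin, operationshistory, clampacm, policyclamp), creating each database, granting ${MYSQL_USER} full privileges on it, and finally flushing privileges. The real provisioning is that shell loop; the sketch below only restates the same statements over JDBC. The URL and root password are placeholders (the trace shows -psecret and the policy_user account, but those belong to this test environment only).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Illustrative restatement of the entrypoint's provisioning loop, not the actual script.
    // Assumes the 'policy_user' account was already created by the image, as in the trace.
    public class DbProvisionSketch {
        public static void main(String[] args) throws Exception {
            String[] schemas = {"migration", "pooling", "policyadmin",
                                "operationshistory", "clampacm", "policyclamp"};
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://localhost:3306/", "root", "secret");
                 Statement s = c.createStatement()) {
                for (String db : schemas) {
                    s.executeUpdate("CREATE DATABASE IF NOT EXISTS `" + db + "`");
                    s.executeUpdate("GRANT ALL PRIVILEGES ON `" + db + "`.* TO 'policy_user'@'%'");
                }
                s.executeUpdate("FLUSH PRIVILEGES");
            }
        }
    }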
09:00:51 policy-apex-pdp | metrics.sample.window.ms = 30000 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.874355485Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 09:00:51 kafka | [2024-04-24 08:58:23,364] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 09:00:51 policy-pap | sasl.oauthbearer.expected.audience = null 09:00:51 zookeeper | [2024-04-24 08:58:18,906] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 09:00:51 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:00:51 policy-db-migrator | -------------- 09:00:51 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.875317251Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=961.146µs 09:00:51 kafka | [2024-04-24 08:58:23,404] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 09:00:51 zookeeper | [2024-04-24 08:58:18,906] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 09:00:51 policy-apex-pdp | receive.buffer.bytes = 65536 09:00:51 policy-pap | sasl.oauthbearer.expected.issuer = null 09:00:51 policy-db-migrator | 09:00:51 mariadb | 09:00:51 kafka | [2024-04-24 08:58:23,406] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,909] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.879472231Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 09:00:51 policy-apex-pdp | reconnect.backoff.max.ms = 1000 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 policy-db-migrator | 09:00:51 mariadb | 2024-04-24 08:58:18+00:00 [Note] [Entrypoint]: Stopping temporary server 09:00:51 kafka | [2024-04-24 08:58:23,406] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,910] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.880154143Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=681.902µs 09:00:51 policy-apex-pdp | reconnect.backoff.ms = 50 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 09:00:51 kafka | [2024-04-24 08:58:23,406] INFO [Controller id=1] Current list of topics in the cluster: HashSet() 
(kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,911] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.883397018Z level=info msg="Executing migration" id="create star table" 09:00:51 policy-apex-pdp | request.timeout.ms = 30000 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 policy-db-migrator | -------------- 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: FTS optimize thread exiting. 09:00:51 kafka | [2024-04-24 08:58:23,407] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,920] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.88411827Z level=info msg="Migration successfully executed" id="create star table" duration=720.672µs 09:00:51 policy-apex-pdp | retry.backoff.ms = 100 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Starting shutdown... 09:00:51 kafka | [2024-04-24 08:58:23,411] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,921] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.889513451Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 09:00:51 policy-apex-pdp | sasl.client.callback.handler.class = null 09:00:51 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:00:51 policy-db-migrator | -------------- 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 09:00:51 kafka | [2024-04-24 08:58:23,412] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,935] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.890327824Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=819.013µs 09:00:51 policy-apex-pdp | sasl.jaas.config = null 09:00:51 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:00:51 policy-db-migrator | 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Buffer pool(s) dump completed at 240424 8:58:18 09:00:51 kafka | [2024-04-24 08:58:23,412] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 09:00:51 zookeeper | [2024-04-24 08:58:18,936] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.89475801Z level=info msg="Executing migration" id="create org table v1" 09:00:51 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:00:51 policy-db-migrator | 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 09:00:51 kafka | [2024-04-24 08:58:23,412] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 09:00:51 zookeeper | [2024-04-24 08:58:21,055] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.895464532Z level=info msg="Migration successfully executed" id="create org table v1" duration=699.662µs 09:00:51 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 policy-pap | security.protocol = PLAINTEXT 09:00:51 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Shutdown completed; log sequence number 328945; transaction id 298 09:00:51 kafka | [2024-04-24 08:58:23,414] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.898430482Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 09:00:51 policy-apex-pdp | sasl.kerberos.service.name = null 09:00:51 policy-pap | security.providers = null 09:00:51 policy-db-migrator | -------------- 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd: Shutdown complete 09:00:51 kafka | [2024-04-24 08:58:23,417] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.899875726Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.445074ms 09:00:51 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 policy-pap | send.buffer.bytes = 131072 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 mariadb | 09:00:51 kafka | [2024-04-24 08:58:23,421] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.903506368Z level=info msg="Executing migration" id="create org_user table v1" 09:00:51 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 policy-pap | session.timeout.ms = 45000 09:00:51 policy-db-migrator | -------------- 09:00:51 mariadb | 2024-04-24 08:58:18+00:00 [Note] [Entrypoint]: Temporary server stopped 09:00:51 kafka | [2024-04-24 08:58:23,424] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.904488085Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=983.348µs 09:00:51 policy-apex-pdp | sasl.login.callback.handler.class = null 09:00:51 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:00:51 policy-db-migrator | 09:00:51 mariadb | 09:00:51 kafka | [2024-04-24 08:58:23,427] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.909596341Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 09:00:51 policy-apex-pdp | sasl.login.class = null 09:00:51 policy-pap | socket.connection.setup.timeout.ms = 10000 09:00:51 policy-db-migrator | 09:00:51 mariadb | 2024-04-24 08:58:18+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 09:00:51 kafka | [2024-04-24 08:58:23,428] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.910313793Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=717.652µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.913287443Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 09:00:51 policy-apex-pdp | sasl.login.connect.timeout.ms = null 09:00:51 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 09:00:51 mariadb | 09:00:51 kafka | [2024-04-24 08:58:23,428] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.913945084Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=657.091µs 09:00:51 policy-apex-pdp | sasl.login.read.timeout.ms = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:23,431] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.919065691Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 09:00:51 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:23,432] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.920470414Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.408763ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.92376167Z level=info msg="Executing migration" id="Update org table charset" 09:00:51 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:23,433] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 09:00:51 policy-pap | ssl.cipher.suites = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.923806111Z level=info msg="Migration successfully executed" id="Update org table charset" duration=45.511µs 09:00:51 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Number of transaction pools: 1 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:23,434] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 09:00:51 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.934678966Z level=info msg="Executing migration" id="Update org_user table charset" 09:00:51 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 09:00:51 policy-db-migrator | 09:00:51 policy-pap | ssl.endpoint.identification.algorithm = https 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.934711846Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.28µs 09:00:51 kafka | [2024-04-24 08:58:23,438] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 09:00:51 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 09:00:51 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 09:00:51 policy-pap | ssl.engine.factory.class = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.939558878Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 09:00:51 kafka | [2024-04-24 08:58:23,438] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 09:00:51 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | ssl.key.password = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.939729641Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to 
Viewers" duration=171.033µs 09:00:51 kafka | [2024-04-24 08:58:23,439] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) 09:00:51 policy-apex-pdp | sasl.mechanism = GSSAPI 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 policy-pap | ssl.keymanager.algorithm = SunX509 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.94380457Z level=info msg="Executing migration" id="create dashboard table" 09:00:51 kafka | [2024-04-24 08:58:23,439] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) 09:00:51 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | ssl.keystore.certificate.chain = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.944974009Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.168709ms 09:00:51 kafka | [2024-04-24 08:58:23,439] INFO Kafka startTimeMs: 1713949103434 (org.apache.kafka.common.utils.AppInfoParser) 09:00:51 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Completed initialization of buffer pool 09:00:51 policy-db-migrator | 09:00:51 policy-pap | ssl.keystore.key = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.94790509Z level=info msg="Executing migration" id="add index dashboard.account_id" 09:00:51 kafka | [2024-04-24 08:58:23,440] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 09:00:51 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 09:00:51 policy-pap | ssl.keystore.location = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.949185Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.27889ms 09:00:51 kafka | [2024-04-24 08:58:23,441] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: 128 rollback segments are active. 09:00:51 policy-pap | ssl.keystore.password = null 09:00:51 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.952327794Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 09:00:51 kafka | [2024-04-24 08:58:23,445] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 
09:00:51 policy-pap | ssl.keystore.type = JKS 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.953116207Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=788.113µs 09:00:51 kafka | [2024-04-24 08:58:23,446] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 09:00:51 policy-pap | ssl.protocol = TLSv1.3 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.956000176Z level=info msg="Executing migration" id="create dashboard_tag table" 09:00:51 kafka | [2024-04-24 08:58:23,446] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: log sequence number 328945; transaction id 299 09:00:51 policy-pap | ssl.provider = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.956602796Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=602.66µs 09:00:51 kafka | [2024-04-24 08:58:23,446] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 09:00:51 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] Plugin 'FEEDBACK' is disabled. 09:00:51 policy-pap | ssl.secure.random.implementation = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.962952414Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 09:00:51 kafka | [2024-04-24 08:58:23,447] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 09:00:51 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 09:00:51 policy-pap | ssl.trustmanager.algorithm = PKIX 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.964571061Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.617687ms 09:00:51 kafka | [2024-04-24 08:58:23,489] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:00:51 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
09:00:51 policy-pap | ssl.truststore.certificates = null 09:00:51 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.969927372Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 09:00:51 kafka | [2024-04-24 08:58:23,546] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 09:00:51 policy-apex-pdp | security.protocol = PLAINTEXT 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 09:00:51 policy-pap | ssl.truststore.location = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.97096133Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.033848ms 09:00:51 kafka | [2024-04-24 08:58:23,546] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 09:00:51 policy-apex-pdp | security.providers = null 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] Server socket created on IP: '0.0.0.0'. 09:00:51 policy-pap | ssl.truststore.password = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.975933364Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 09:00:51 kafka | [2024-04-24 08:58:23,578] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] Server socket created on IP: '::'. 09:00:51 policy-pap | ssl.truststore.type = JKS 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.982065017Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.132273ms 09:00:51 policy-apex-pdp | send.buffer.bytes = 131072 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] mariadbd: ready for connections. 
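The consumer settings dumped throughout this section (for policy-pap and for the policy-apex-pdp consumer-6c14929a-…) are ordinary Kafka client ConsumerConfig values: bootstrap.servers = [kafka:9092], PLAINTEXT security, StringDeserializer for both key and value, auto.offset.reset = latest, and group ids policy-pap and 6c14929a-34c8-48a0-adf2-d542a07b4ce8; a little further down the pap consumer subscribes to the policy-pdp-pap topic the broker creates. Below is a minimal Java consumer wired the same way, purely illustrative and not the ONAP code itself; note kafka:9092 resolves only inside the compose network, while the broker advertises localhost:29092 to the host.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Sketch of a consumer configured like the ones in the config dumps above;
    // every property not set here falls back to the Kafka client default.
    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            p.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
            p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }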
09:00:51 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.987392238Z level=info msg="Executing migration" id="create dashboard v2" 09:00:51 policy-apex-pdp | session.timeout.ms = 45000 09:00:51 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.988242542Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=850.535µs 09:00:51 kafka | [2024-04-24 08:58:28,580] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 09:00:51 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.990700234Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 09:00:51 kafka | [2024-04-24 08:58:28,580] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 09:00:51 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 09:00:51 policy-pap | 09:00:51 policy-apex-pdp | ssl.cipher.suites = null 09:00:51 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.991393546Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=694.111µs 09:00:51 kafka | [2024-04-24 08:58:51,135] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 09:00:51 mariadb | 2024-04-24 8:58:18 0 [Note] InnoDB: Buffer pool(s) load completed at 240424 8:58:18 09:00:51 policy-pap | [2024-04-24T08:58:49.057+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.9963784Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 09:00:51 kafka | [2024-04-24 08:58:51,135] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 
(kafka.zk.AdminZkClient) 09:00:51 mariadb | 2024-04-24 8:58:19 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 09:00:51 policy-pap | [2024-04-24T08:58:49.057+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 policy-pap | [2024-04-24T08:58:49.057+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949129055 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:11.997238324Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=860.424µs 09:00:51 kafka | [2024-04-24 08:58:51,139] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 09:00:51 mariadb | 2024-04-24 8:58:19 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 09:00:51 policy-pap | [2024-04-24T08:58:49.059+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-1, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Subscribed to topic(s): policy-pdp-pap 09:00:51 policy-pap | [2024-04-24T08:58:49.060+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.004176152Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 09:00:51 kafka | [2024-04-24 08:58:51,142] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 09:00:51 mariadb | 2024-04-24 8:58:19 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 09:00:51 policy-pap | allow.auto.create.topics = true 09:00:51 policy-pap | auto.commit.interval.ms = 5000 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.004513298Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=338.396µs 09:00:51 mariadb | 2024-04-24 8:58:19 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 09:00:51 kafka | [2024-04-24 08:58:51,169] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(UfYjnzzkRPeYang4gRgPIg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(3d7pexomSuav55xzl5U12w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 09:00:51 policy-pap | auto.include.jmx.reporter = true 09:00:51 policy-pap | auto.offset.reset = latest 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.009204439Z level=info msg="Executing migration" id="drop table dashboard_v1" 09:00:51 kafka | [2024-04-24 08:58:51,170] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 09:00:51 policy-pap | bootstrap.servers = [kafka:9092] 09:00:51 policy-pap | check.crcs = true 09:00:51 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.010040016Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=835.477µs 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | client.dns.lookup = use_all_dns_ips 09:00:51 policy-pap | client.id = consumer-policy-pap-2 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.015211385Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | client.rack = 09:00:51 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.015309897Z level=info msg="Migration successfully executed" id="alter 
dashboard.data to mediumtext v1" duration=99.031µs 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | connections.max.idle.ms = 540000 09:00:51 policy-apex-pdp | ssl.engine.factory.class = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.019025648Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | default.api.timeout.ms = 60000 09:00:51 policy-apex-pdp | ssl.key.password = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.021999254Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.973826ms 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | enable.auto.commit = true 09:00:51 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.025603634Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | exclude.internal.topics = true 09:00:51 policy-apex-pdp | ssl.keystore.certificate.chain = null 09:00:51 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.027442938Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.839604ms 09:00:51 policy-pap | fetch.max.bytes = 52428800 09:00:51 policy-apex-pdp | ssl.keystore.key = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.030610159Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 09:00:51 policy-pap | fetch.max.wait.ms = 500 09:00:51 policy-apex-pdp | ssl.keystore.location = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.032423563Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" 
duration=1.813414ms 09:00:51 policy-pap | fetch.min.bytes = 1 09:00:51 policy-apex-pdp | ssl.keystore.password = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.037418659Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 09:00:51 policy-pap | group.id = policy-pap 09:00:51 policy-apex-pdp | ssl.keystore.type = JKS 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.038206694Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=788.275µs 09:00:51 policy-pap | group.instance.id = null 09:00:51 policy-apex-pdp | ssl.protocol = TLSv1.3 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.041457735Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 09:00:51 policy-pap | heartbeat.interval.ms = 3000 09:00:51 policy-apex-pdp | ssl.provider = null 09:00:51 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.044135667Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.674212ms 09:00:51 policy-pap | interceptor.classes = [] 09:00:51 policy-apex-pdp | ssl.secure.random.implementation = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.050975777Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 09:00:51 policy-pap | internal.leave.group.on.close = true 09:00:51 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.051990546Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.016059ms 09:00:51 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:00:51 policy-apex-pdp | ssl.truststore.certificates = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,172] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition 
to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.055068186Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 09:00:51 policy-pap | isolation.level = read_uncommitted 09:00:51 policy-apex-pdp | ssl.truststore.location = null 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.05631856Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.251214ms 09:00:51 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 policy-apex-pdp | ssl.truststore.password = null 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.06420606Z level=info msg="Executing migration" id="Update dashboard table charset" 09:00:51 policy-pap | max.partition.fetch.bytes = 1048576 09:00:51 policy-apex-pdp | ssl.truststore.type = JKS 09:00:51 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.064286681Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=80.221µs 09:00:51 policy-pap | max.poll.interval.ms = 300000 09:00:51 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.068008402Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 09:00:51 policy-pap | max.poll.records = 500 09:00:51 policy-apex-pdp | 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.068058923Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=48.781µs 09:00:51 policy-pap | metadata.max.age.ms = 300000 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.331+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.071493848Z level=info msg="Executing 
migration" id="Add column folder_id in dashboard" 09:00:51 policy-pap | metric.reporters = [] 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.331+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.075022486Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.525337ms 09:00:51 policy-pap | metrics.num.samples = 2 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.331+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949132331 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.081549701Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 09:00:51 policy-pap | metrics.recording.level = INFO 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.332+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Subscribed to topic(s): policy-pdp-pap 09:00:51 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.083799613Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.251562ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.089687735Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.332+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6f6498b9-feed-4855-a99d-511b9662bd01, alive=false, publisher=null]]: starting 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | metrics.sample.window.ms = 30000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.092191544Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.503718ms 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.345+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 
09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.131273659Z level=info msg="Executing migration" id="Add column uid in dashboard" 09:00:51 policy-apex-pdp | acks = -1 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | receive.buffer.bytes = 65536 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.137211302Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=5.983694ms 09:00:51 policy-apex-pdp | auto.include.jmx.reporter = true 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | reconnect.backoff.max.ms = 1000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.141001194Z level=info msg="Executing migration" id="Update uid column values in dashboard" 09:00:51 policy-apex-pdp | batch.size = 16384 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | reconnect.backoff.ms = 50 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.14127771Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=277.156µs 09:00:51 policy-apex-pdp | bootstrap.servers = [kafka:9092] 09:00:51 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | request.timeout.ms = 30000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.145644973Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 09:00:51 policy-apex-pdp | buffer.memory = 33554432 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | retry.backoff.ms = 100 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.146502999Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=857.046µs 09:00:51 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | sasl.client.callback.handler.class = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.151696278Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 09:00:51 policy-apex-pdp | client.id = producer-1 09:00:51 
policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | sasl.jaas.config = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.152382561Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=690.303µs 09:00:51 policy-apex-pdp | compression.type = none 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.155774796Z level=info msg="Executing migration" id="Update dashboard title length" 09:00:51 policy-apex-pdp | connections.max.idle.ms = 540000 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.155807057Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=33.021µs 09:00:51 policy-apex-pdp | delivery.timeout.ms = 120000 09:00:51 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.kerberos.service.name = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.163037584Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 09:00:51 policy-apex-pdp | enable.idempotence = true 09:00:51 policy-apex-pdp | interceptor.classes = [] 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.165112224Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=2.07862ms 09:00:51 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.16858704Z level=info msg="Executing migration" id="create dashboard_provisioning" 09:00:51 policy-apex-pdp | linger.ms = 0 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.169299114Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=712.224µs 09:00:51 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 policy-apex-pdp | max.block.ms = 60000 09:00:51 policy-db-migrator 
| 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.172805941Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 09:00:51 policy-pap | sasl.login.callback.handler.class = null 09:00:51 policy-apex-pdp | max.in.flight.requests.per.connection = 5 09:00:51 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.178110611Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.30252ms 09:00:51 policy-pap | sasl.login.class = null 09:00:51 policy-apex-pdp | max.request.size = 1048576 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.182681869Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 09:00:51 policy-pap | sasl.login.connect.timeout.ms = null 09:00:51 policy-apex-pdp | metadata.max.age.ms = 300000 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.183396333Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=714.784µs 09:00:51 policy-pap | sasl.login.read.timeout.ms = null 09:00:51 policy-apex-pdp | metadata.max.idle.ms = 300000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.187538102Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 09:00:51 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:00:51 policy-apex-pdp | metric.reporters = [] 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,173] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.188322747Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=782.714µs 09:00:51 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:00:51 policy-apex-pdp | metrics.num.samples = 2 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.192363834Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 09:00:51 policy-pap | sasl.login.refresh.window.factor = 0.8 09:00:51 policy-apex-pdp | metrics.recording.level = INFO 09:00:51 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.193370343Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.006069ms 09:00:51 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:00:51 policy-apex-pdp | metrics.sample.window.ms = 30000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.19897894Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 09:00:51 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:00:51 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.199299906Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=320.876µs 09:00:51 policy-pap | sasl.login.retry.backoff.ms = 100 09:00:51 policy-apex-pdp | partitioner.availability.timeout.ms = 0 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.20318413Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 09:00:51 policy-pap | sasl.mechanism = GSSAPI 09:00:51 policy-apex-pdp | partitioner.class = null 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.203671869Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=490.709µs 09:00:51 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 policy-apex-pdp | partitioner.ignore.keys = false 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.207245237Z level=info msg="Executing migration" id="Add check_sum column" 09:00:51 policy-pap | sasl.oauthbearer.expected.audience = null 09:00:51 policy-apex-pdp | receive.buffer.bytes = 32768 09:00:51 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 09:00:51 kafka | [2024-04-24 08:58:51,174] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.210690693Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.447296ms 09:00:51 policy-pap | sasl.oauthbearer.expected.issuer = null 09:00:51 policy-apex-pdp | reconnect.backoff.max.ms = 1000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,178] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.214810272Z level=info msg="Executing migration" id="Add index for dashboard_title" 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 policy-apex-pdp | reconnect.backoff.ms = 50 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 09:00:51 kafka | [2024-04-24 08:58:51,178] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.215562086Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=752.784µs 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 policy-apex-pdp | request.timeout.ms = 30000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.220874438Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 policy-apex-pdp | retries = 2147483647 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.221102992Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=228.724µs 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 policy-apex-pdp | retry.backoff.ms = 100 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.224573248Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 09:00:51 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:00:51 policy-apex-pdp | sasl.client.callback.handler.class = null 09:00:51 policy-db-migrator | 
> upgrade 0410-jpatoscarequirement_properties.sql 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.224814962Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=242.904µs 09:00:51 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:00:51 policy-apex-pdp | sasl.jaas.config = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.228437051Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 09:00:51 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:00:51 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.229189766Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=752.715µs 09:00:51 policy-pap | security.protocol = PLAINTEXT 09:00:51 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.233388616Z level=info msg="Executing migration" id="Add isPublic for dashboard" 09:00:51 policy-pap | security.providers = null 09:00:51 policy-apex-pdp | sasl.kerberos.service.name = null 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.236633538Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.243072ms 09:00:51 policy-pap | send.buffer.bytes = 131072 09:00:51 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.243152002Z level=info msg="Executing migration" id="create data_source table" 09:00:51 policy-pap | session.timeout.ms = 45000 09:00:51 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:12.244018969Z level=info msg="Migration successfully executed" id="create data_source table" duration=867.847µs 09:00:51 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:00:51 policy-apex-pdp | sasl.login.callback.handler.class = null 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.254455678Z level=info msg="Executing migration" id="add index data_source.account_id" 09:00:51 policy-pap | socket.connection.setup.timeout.ms = 10000 09:00:51 policy-apex-pdp | sasl.login.class = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.25563959Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.182972ms 09:00:51 policy-pap | ssl.cipher.suites = null 09:00:51 policy-apex-pdp | sasl.login.connect.timeout.ms = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.261853239Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 policy-apex-pdp | sasl.login.read.timeout.ms = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.262730335Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=876.046µs 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.endpoint.identification.algorithm = https 09:00:51 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.266307343Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.engine.factory.class = null 09:00:51 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 09:00:51 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.267090929Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=783.746µs 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.key.password = null 09:00:51 
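The policy-db-migrator entries around here apply numbered upgrade scripts (0420-jpatoscaservicetemplate_metadata.sql, 0430-jpatoscatopologytemplate_inputs.sql, and so on) as plain CREATE TABLE IF NOT EXISTS statements. As a hedged illustration only, one of the logged DDL statements could be applied through JDBC as below; the JDBC URL, database name and credentials are assumptions for the sketch, not values taken from this job, and the real migrator drives this from its own scripts and configuration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrationStepSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; not taken from the log.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_password");
             Statement stmt = conn.createStatement()) {
            // DDL text copied verbatim from the 0420 upgrade step shown in the log.
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata "
                + "(name VARCHAR(120) NULL, version VARCHAR(20) NULL, "
                + "METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)");
        }
    }
}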
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.272126385Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.keymanager.algorithm = SunX509 09:00:51 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.27291761Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=794.745µs 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.keystore.certificate.chain = null 09:00:51 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.278772331Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.keystore.key = null 09:00:51 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.287836174Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.064803ms 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.keystore.location = null 09:00:51 policy-apex-pdp | sasl.mechanism = GSSAPI 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.293585024Z level=info msg="Executing migration" id="create data_source table v2" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.keystore.password = null 09:00:51 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.295158354Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.57362ms 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.keystore.type = JKS 09:00:51 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.299303434Z 
level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.protocol = TLSv1.3 09:00:51 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.30015533Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=853.327µs 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.provider = null 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.306032071Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.secure.random.implementation = null 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.30696869Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=936.399µs 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.trustmanager.algorithm = PKIX 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.31115555Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 09:00:51 kafka | [2024-04-24 08:58:51,179] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.truststore.certificates = null 09:00:51 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 policy-db-migrator | > upgrade 0450-pdpgroup.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.31175833Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=600.31µs 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.truststore.location = null 09:00:51 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.317201494Z 
level=info msg="Executing migration" id="Add column with_credentials" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.truststore.password = null 09:00:51 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.321524737Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.322092ms 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | ssl.truststore.type = JKS 09:00:51 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.326046533Z level=info msg="Executing migration" id="Add secure json data column" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | security.protocol = PLAINTEXT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.328432558Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.385705ms 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | security.providers = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.331932385Z level=info msg="Executing migration" id="Update data_source table charset" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 09:00:51 policy-apex-pdp | send.buffer.bytes = 131072 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.331991826Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=59.301µs 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.336121465Z level=info msg="Executing migration" id="Update initial version to 1" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949129066 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.336328619Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=206.774µs 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:49.066+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-apex-pdp | ssl.cipher.suites = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.34209997Z level=info msg="Executing migration" id="Add read_only data column" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:49.385+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.346271559Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.168839ms 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 09:00:51 policy-pap | [2024-04-24T08:58:49.525+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.350924717Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0470-pdp.sql 09:00:51 policy-apex-pdp | ssl.engine.factory.class = null 09:00:51 policy-pap | [2024-04-24T08:58:49.770+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@8bde368, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5065bdac, org.springframework.security.web.context.SecurityContextHolderFilter@6fc6f68f, org.springframework.security.web.header.HeaderWriterFilter@60b4d934, org.springframework.security.web.authentication.logout.LogoutFilter@441016d6, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3be369fc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@30437e9c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@762f8ff6, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@2e9dcdd3, org.springframework.security.web.access.ExceptionTranslationFilter@2435c6ae, org.springframework.security.web.access.intercept.AuthorizationFilter@4e26040f] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.351284545Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=359.658µs 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-apex-pdp | ssl.key.password = null 09:00:51 policy-pap | [2024-04-24T08:58:50.497+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.355871142Z level=info msg="Executing migration" id="Update json_data with nulls" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 09:00:51 policy-pap | [2024-04-24T08:58:50.590+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.356125407Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=252.305µs 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-apex-pdp | ssl.keystore.certificate.chain = null 09:00:51 policy-pap | 
[2024-04-24T08:58:50.607+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.360135273Z level=info msg="Executing migration" id="Add uid column" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | ssl.keystore.key = null 09:00:51 policy-pap | [2024-04-24T08:58:50.623+00:00|INFO|ServiceManager|main] Policy PAP starting 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.362556549Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.419966ms 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | ssl.keystore.location = null 09:00:51 policy-pap | [2024-04-24T08:58:50.623+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.36729837Z level=info msg="Executing migration" id="Update uid value" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 09:00:51 policy-apex-pdp | ssl.keystore.password = null 09:00:51 policy-pap | [2024-04-24T08:58:50.624+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.367580775Z level=info msg="Migration successfully executed" id="Update uid value" duration=281.905µs 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-apex-pdp | ssl.keystore.type = JKS 09:00:51 policy-pap | [2024-04-24T08:58:50.624+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.370123504Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 09:00:51 policy-apex-pdp | ssl.protocol = TLSv1.3 09:00:51 policy-pap | [2024-04-24T08:58:50.624+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.371103192Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" 
duration=979.938µs 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-apex-pdp | ssl.provider = null 09:00:51 policy-pap | [2024-04-24T08:58:50.625+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.375912184Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 09:00:51 kafka | [2024-04-24 08:58:51,180] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | ssl.secure.random.implementation = null 09:00:51 policy-pap | [2024-04-24T08:58:50.625+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.377457803Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.545399ms 09:00:51 kafka | [2024-04-24 08:58:51,180] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 09:00:51 policy-pap | [2024-04-24T08:58:50.627+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@716eae1 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.38307253Z level=info msg="Executing migration" id="create api_key table" 09:00:51 kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 09:00:51 policy-apex-pdp | ssl.truststore.certificates = null 09:00:51 policy-pap | [2024-04-24T08:58:50.638+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:12.383927547Z level=info msg="Migration successfully executed" id="create api_key table" duration=852.326µs 09:00:51 kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-apex-pdp | ssl.truststore.location = null 09:00:51 policy-pap | [2024-04-24T08:58:50.639+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:00:51 policy-pap | allow.auto.create.topics = true 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.388781259Z level=info msg="Executing migration" id="add index api_key.account_id" 09:00:51 policy-apex-pdp | ssl.truststore.password = null 09:00:51 policy-pap | auto.commit.interval.ms = 5000 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.389626296Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=844.777µs 09:00:51 policy-apex-pdp | ssl.truststore.type = JKS 09:00:51 policy-pap | auto.include.jmx.reporter = true 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.393093082Z level=info msg="Executing migration" id="add index api_key.key" 09:00:51 policy-apex-pdp | transaction.timeout.ms = 60000 09:00:51 policy-pap | auto.offset.reset = latest 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.393917808Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=824.676µs 09:00:51 policy-apex-pdp | transactional.id = null 09:00:51 policy-pap | bootstrap.servers = [kafka:9092] 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.399680237Z level=info msg="Executing migration" id="add index api_key.account_id_name" 09:00:51 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:00:51 policy-pap | check.crcs = true 09:00:51 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 09:00:51 kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.401168496Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.486709ms 09:00:51 policy-apex-pdp | 09:00:51 policy-pap | client.dns.lookup = use_all_dns_ips 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.40558511Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.355+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
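For reference, the ConsumerConfig dump that policy-pap prints in the records above (bootstrap.servers=[kafka:9092], consumer group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a on topic policy-pdp-pap, auto.offset.reset=latest, auto-commit every 5000 ms) corresponds to a subscription along the lines of the sketch below. This is a minimal kafka-python illustration of the same effective settings, not the Java client the component actually runs; the localhost bootstrap address is an assumed host-side mapping of the in-network address kafka:9092.

```python
# Minimal sketch: subscribe to policy-pdp-pap with the key consumer settings
# that policy-pap logs above. kafka-python and the localhost address are
# assumptions made for illustration only.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers=["localhost:9092"],          # "kafka:9092" inside the compose network
    group_id="c2598a93-7b5f-4e4e-b23a-b864ffd9a18a",
    auto_offset_reset="latest",
    enable_auto_commit=True,
    auto_commit_interval_ms=5000,
    value_deserializer=lambda b: b.decode("utf-8"),
)

for record in consumer:
    print(record.partition, record.offset, record.value)
```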
09:00:51 policy-pap | client.id = consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.407052479Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.465507ms 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.376+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 policy-pap | client.rack = 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.410165357Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.376+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 policy-pap | connections.max.idle.ms = 540000 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.411159446Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=994.149µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.377+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949132376 09:00:51 policy-pap | default.api.timeout.ms = 60000 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.417511507Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.377+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6f6498b9-feed-4855-a99d-511b9662bd01, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:00:51 policy-pap | enable.auto.commit = true 09:00:51 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 09:00:51 kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.418892073Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.380086ms 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.377+00:00|INFO|ServiceManager|main] service manager starting set alive 09:00:51 policy-pap | exclude.internal.topics = true 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.423746996Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.378+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 09:00:51 policy-pap | fetch.max.bytes = 52428800 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 09:00:51 kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.430708769Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.961943ms 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.379+00:00|INFO|ServiceManager|main] service manager starting topic sinks 09:00:51 policy-pap | fetch.max.wait.ms = 500 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.434852638Z level=info msg="Executing migration" id="create api_key table v2" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.380+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 09:00:51 policy-pap | fetch.min.bytes = 1 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.435754886Z level=info msg="Migration 
successfully executed" id="create api_key table v2" duration=902.977µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.384+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 09:00:51 policy-pap | group.id = c2598a93-7b5f-4e4e-b23a-b864ffd9a18a 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.441101657Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.384+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 09:00:51 policy-pap | group.instance.id = null 09:00:51 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 09:00:51 kafka | [2024-04-24 08:58:51,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.441915823Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=812.876µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.384+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 09:00:51 policy-pap | heartbeat.interval.ms = 3000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.444812058Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.385+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6c14929a-34c8-48a0-adf2-d542a07b4ce8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 09:00:51 policy-pap | interceptor.classes = [] 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 09:00:51 kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.445595134Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=782.825µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.385+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6c14929a-34c8-48a0-adf2-d542a07b4ce8, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 09:00:51 policy-pap | internal.leave.group.on.close = true 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.450591219Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.385+00:00|INFO|ServiceManager|main] service manager starting Create REST server 09:00:51 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.451359533Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=767.904µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.410+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 09:00:51 policy-pap | isolation.level = read_uncommitted 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.454607455Z level=info msg="Executing migration" id="copy api_key v1 to v2" 09:00:51 policy-apex-pdp | [] 09:00:51 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 09:00:51 kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.455109514Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=502.23µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.413+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 09:00:51 policy-pap | max.partition.fetch.bytes = 1048576 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.459811754Z level=info msg="Executing migration" id="Drop old table api_key_v1" 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c332eed1-33b5-4f1c-8b3b-05ac50842ecd","timestampMs":1713949132390,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 policy-pap | max.poll.interval.ms = 300000 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.46064783Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=835.606µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|ServiceManager|main] service manager starting Rest Server 09:00:51 policy-pap | max.poll.records = 500 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.465067264Z level=info msg="Executing migration" id="Update api_key table charset" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|ServiceManager|main] service manager starting 09:00:51 policy-pap | metadata.max.age.ms = 300000 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.465095745Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=29.001µs 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 09:00:51 policy-pap | metric.reporters = [] 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.469643422Z level=info msg="Executing migration" id="Add expires to api_key table" 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.624+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 policy-pap | metrics.num.samples = 2 09:00:51 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.47323171Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.586048ms 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.634+00:00|INFO|ServiceManager|main] service manager started 09:00:51 policy-pap | metrics.recording.level = INFO 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.476573764Z level=info msg="Executing migration" id="Add service account foreign key" 09:00:51 policy-apex-pdp | 
[2024-04-24T08:58:52.634+00:00|INFO|ServiceManager|main] service manager started 09:00:51 policy-pap | metrics.sample.window.ms = 30000 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.480510779Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.935874ms 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.634+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 09:00:51 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,323] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.513881525Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 09:00:51 policy-pap | receive.buffer.bytes = 65536 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.635+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.514185902Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=308.747µs 09:00:51 policy-pap | reconnect.backoff.max.ms = 1000 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.800+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.517468644Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 09:00:51 policy-pap | reconnect.backoff.ms = 50 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.800+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg 09:00:51 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.521408979Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.940146ms 09:00:51 policy-pap | request.timeout.ms = 30000 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.802+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.528113106Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 09:00:51 policy-pap | retry.backoff.ms = 100 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:12.530608724Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.494938ms 09:00:51 policy-pap | sasl.client.callback.handler.class = null 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.819+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] (Re-)joining group 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-pap | sasl.jaas.config = null 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Request joining group due to: need to re-join with the given member-id: consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.535335985Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.536057349Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=719.953µs 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:52.831+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] (Re-)joining group 09:00:51 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.541647975Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 09:00:51 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:53.250+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.542523052Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=874.457µs 09:00:51 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:53.250+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.547271742Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 09:00:51 policy-pap | sasl.kerberos.service.name = null 09:00:51 kafka | [2024-04-24 08:58:51,324] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:55.837+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2', protocol='range'} 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.548642049Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.369846ms 09:00:51 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 kafka | [2024-04-24 08:58:51,325] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:55.847+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Finished assignment for group at generation 1: {consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2=Assignment(partitions=[policy-pdp-pap-0])} 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.554266346Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 09:00:51 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 kafka | [2024-04-24 08:58:51,325] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:55.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Successfully synced group in generation Generation{generationId=1, memberId='consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2', protocol='range'} 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.55499041Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=726.574µs 09:00:51 policy-pap | sasl.login.callback.handler.class = null 09:00:51 kafka | [2024-04-24 08:58:51,326] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:55.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:00:51 policy-db-migrator | > upgrade 0570-toscadatatype.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.55759946Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 09:00:51 policy-pap | sasl.login.class = null 09:00:51 kafka | [2024-04-24 08:58:51,326] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:55.857+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Adding newly assigned partitions: policy-pdp-pap-0 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.5587086Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.10443ms 09:00:51 policy-pap | sasl.login.connect.timeout.ms = null 09:00:51 kafka | [2024-04-24 08:58:51,326] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:55.865+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Found no committed offset for partition policy-pdp-pap-0 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.562673106Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 09:00:51 policy-pap | sasl.login.read.timeout.ms = null 09:00:51 kafka | [2024-04-24 08:58:51,326] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:58:55.874+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2, groupId=6c14929a-34c8-48a0-adf2-d542a07b4ce8] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
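The consumer-group records above show the apex-pdp source consumer (group 6c14929a-34c8-48a0-adf2-d542a07b4ce8) joining, being assigned policy-pdp-pap-0, finding no committed offset, and resetting to offset 1. A small sketch for inspecting the same partition state from outside the containers, again assuming kafka-python and a host-side mapping of the broker address:

```python
# Inspect the end offset of policy-pdp-pap-0 and what the apex-pdp consumer
# group has committed, mirroring the coordinator log lines above.
from kafka import KafkaConsumer, TopicPartition

tp = TopicPartition("policy-pdp-pap", 0)
consumer = KafkaConsumer(
    bootstrap_servers=["localhost:9092"],          # assumed mapping of kafka:9092
    group_id="6c14929a-34c8-48a0-adf2-d542a07b4ce8",
    enable_auto_commit=False,
)
consumer.assign([tp])

print("end offset:", consumer.end_offsets([tp])[tp])
print("committed offset:", consumer.committed(tp))  # None until the group commits
consumer.close()
```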
09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.564368388Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.699642ms 09:00:51 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:00:51 policy-apex-pdp | [2024-04-24T08:58:56.153+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.2 - policyadmin [24/Apr/2024:08:58:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2" 09:00:51 kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.571187478Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 09:00:51 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.385+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.57128533Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=98.422µs 09:00:51 policy-pap | sasl.login.refresh.window.factor = 0.8 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.575161984Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 09:00:51 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.405+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 09:00:51 
policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.575203915Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=43.991µs 09:00:51 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 kafka | [2024-04-24 08:58:51,334] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 09:00:51 policy-pap | sasl.login.retry.backoff.ms = 100 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.579271522Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 09:00:51 kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 09:00:51 policy-pap | sasl.mechanism = GSSAPI 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.582970753Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.700251ms 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.408+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.585765446Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.535+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.expected.audience = null 
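The Jetty start-up record earlier (RestServerParameters bound to 0.0.0.0:6969 with the Prometheus MetricsServlet mounted at /metrics, user policyadmin) and the access-log line above (Prometheus/2.51.2 receiving a 200 with 10649 bytes) can be reproduced by hand with a basic-auth request. The localhost port below is an assumption about how the CSIT compose exposes the apex-pdp container; the credentials are the ones shown in the start-up record.

```python
# Fetch the apex-pdp Prometheus metrics the same way the scraper in the log did.
import requests

resp = requests.get(
    "http://localhost:6969/metrics",        # assumed host-side mapping of 0.0.0.0:6969
    auth=("policyadmin", "zb!XztG34"),       # credentials from the Jetty start-up record
    timeout=10,
)
print(resp.status_code, len(resp.content), "bytes of Prometheus exposition text")
```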
09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.589454307Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.687901ms 09:00:51 policy-apex-pdp | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.expected.issuer = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.595740186Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.545+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 09:00:51 kafka | [2024-04-24 08:58:51,335] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.595804038Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=64.542µs 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.60012433Z level=info msg="Executing migration" id="create quota table v1" 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.545+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY 
PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.601250602Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.128202ms 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.547+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.605522343Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,336] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.6074185Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.894476ms 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.558+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.613669599Z level=info 
msg="Executing migration" id="Update quota table charset" 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.6137318Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=68.251µs 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.558+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 09:00:51 policy-pap | security.protocol = PLAINTEXT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.620439027Z level=info msg="Executing migration" id="create plugin_setting table" 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.563+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 09:00:51 policy-pap | security.providers = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.621758443Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.317135ms 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 09:00:51 policy-pap | send.buffer.bytes = 131072 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.625127337Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.563+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:00:51 policy-db-migrator | 09:00:51 policy-pap | session.timeout.ms = 45000 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.626411082Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.283685ms 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.573+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | 09:00:51 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.629663053Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 09:00:51 policy-apex-pdp | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 09:00:51 policy-pap | socket.connection.setup.timeout.ms = 10000 09:00:51 kafka | [2024-04-24 08:58:51,337] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.63420832Z level=info msg="Migration successfully executed" id="Add 
column plugin_version to plugin_settings" duration=4.545027ms 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.576+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | ssl.cipher.suites = null 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.639596383Z level=info msg="Executing migration" id="Update plugin_setting table charset" 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 09:00:51 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.639619994Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.311µs 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.584+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | ssl.endpoint.identification.algorithm = https 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.641786965Z level=info msg="Executing migration" id="create session table" 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | 09:00:51 policy-pap | ssl.engine.factory.class = null 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.642719432Z level=info msg="Migration successfully executed" id="create session table" duration=933.767µs 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.584+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:00:51 policy-db-migrator | 09:00:51 policy-pap | ssl.key.password = null 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.677629348Z level=info msg="Executing migration" id="Drop old table playlist table" 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.669+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 09:00:51 policy-pap | ssl.keymanager.algorithm = SunX509 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.677910193Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=280.165µs 09:00:51 policy-apex-pdp | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.682884708Z level=info msg="Executing migration" id="Drop old table playlist_item table" 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.670+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.683119273Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=234.355µs 09:00:51 policy-pap | ssl.keystore.certificate.chain = null 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.681+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:51,338] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.688054327Z level=info msg="Executing migration" id="create playlist table v2" 09:00:51 policy-pap | ssl.keystore.key = null 09:00:51 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-apex-pdp | [2024-04-24T08:59:12.682+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.6892225Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.164783ms 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 policy-pap | ssl.keystore.location = null 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 09:00:51 policy-apex-pdp | [2024-04-24T08:59:56.081+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.2 - policyadmin [24/Apr/2024:08:59:56 +0000] "GET /metrics HTTP/1.1" 200 
10652 "-" "Prometheus/2.51.2" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.693226685Z level=info msg="Executing migration" id="create playlist item table v2" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | ssl.keystore.password = null 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.694460579Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.235944ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | ssl.keystore.type = JKS 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.701749988Z level=info msg="Executing migration" id="Update playlist table charset" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | ssl.protocol = TLSv1.3 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.701773058Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=25.27µs 09:00:51 policy-db-migrator | > upgrade 0630-toscanodetype.sql 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 09:00:51 policy-pap | ssl.provider = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.705322427Z level=info msg="Executing migration" id="Update playlist_item table charset" 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 09:00:51 policy-pap | ssl.secure.random.implementation = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.705343767Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=22.41µs 09:00:51 kafka | [2024-04-24 08:58:51,339] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 09:00:51 policy-pap | ssl.trustmanager.algorithm = PKIX 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.710397283Z level=info msg="Executing migration" id="Add playlist column created_at" 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 09:00:51 policy-pap | ssl.truststore.certificates = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.714701625Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.305602ms 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 09:00:51 policy-pap | ssl.truststore.location = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.719367485Z level=info msg="Executing migration" id="Add playlist column updated_at" 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 09:00:51 policy-pap | ssl.truststore.password = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.722534694Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.167169ms 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 09:00:51 policy-pap | ssl.truststore.type = JKS 09:00:51 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.729050838Z level=info msg="Executing migration" id="drop preferences table v2" 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 09:00:51 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.729131771Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=81.273µs 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 09:00:51 policy-pap | 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.733951112Z level=info msg="Executing migration" id="drop preferences table v3" 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.734182457Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=230.595µs 09:00:51 kafka | [2024-04-24 08:58:51,340] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.73857675Z level=info msg="Executing migration" id="create preferences table v3" 09:00:51 kafka | [2024-04-24 
08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130645 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.740174171Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.596671ms 09:00:51 kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.645+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Subscribed to topic(s): policy-pdp-pap 09:00:51 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.746604024Z level=info msg="Executing migration" id="Update preferences table charset" 09:00:51 kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.746641565Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=36.961µs 09:00:51 kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ecdaf812-364f-4159-9e55-85c348169a99, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6f651ac 09:00:51 policy-db-migrator | 
CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.751727131Z level=info msg="Executing migration" id="Add column team_id in preferences" 09:00:51 kafka | [2024-04-24 08:58:51,341] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ecdaf812-364f-4159-9e55-85c348169a99, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.759066221Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=7.33334ms 09:00:51 kafka | [2024-04-24 08:58:51,344] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.646+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.764498905Z level=info msg="Executing migration" id="Update team_id column values in preferences" 09:00:51 kafka | [2024-04-24 08:58:51,347] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 09:00:51 policy-pap | allow.auto.create.topics = true 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.764679928Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=179.693µs 09:00:51 kafka | [2024-04-24 08:58:51,349] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | auto.commit.interval.ms = 5000 09:00:51 policy-db-migrator | > upgrade 0660-toscaparameter.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.769133004Z level=info msg="Executing migration" id="Add column week_start in preferences" 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | auto.include.jmx.reporter = true 09:00:51 
policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.772541469Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.408505ms 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | auto.offset.reset = latest 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.776574145Z level=info msg="Executing migration" id="Add column preferences.json_data" 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | bootstrap.servers = [kafka:9092] 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.77998427Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.409445ms 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | check.crcs = true 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.997242394Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | client.dns.lookup = use_all_dns_ips 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:12.997378246Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=144.012µs 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | client.id = consumer-policy-pap-4 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.093357319Z level=info msg="Executing migration" id="Add preferences index org_id" 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | client.rack = 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.094287826Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=933.018µs 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | connections.max.idle.ms = 540000 09:00:51 policy-db-migrator | > upgrade 0670-toscapolicies.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.104027974Z level=info msg="Executing migration" id="Add 
preferences index user_id" 09:00:51 kafka | [2024-04-24 08:58:51,350] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | default.api.timeout.ms = 60000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.105482452Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.459118ms 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | enable.auto.commit = true 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.114619839Z level=info msg="Executing migration" id="create alert table v1" 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | exclude.internal.topics = true 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.116298552Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.678993ms 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | fetch.max.bytes = 52428800 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.128468676Z level=info msg="Executing migration" id="add index alert org_id & id " 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | fetch.max.wait.ms = 500 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.129822823Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.351657ms 09:00:51 policy-pap | fetch.min.bytes = 1 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.157724052Z level=info msg="Executing migration" id="add index alert state" 09:00:51 policy-pap | group.id = policy-pap 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.159011617Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.286655ms 09:00:51 policy-pap | group.instance.id = null 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name 
VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.179790857Z level=info msg="Executing migration" id="add index alert dashboard_id" 09:00:51 policy-pap | heartbeat.interval.ms = 3000 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.181377648Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.585911ms 09:00:51 policy-pap | interceptor.classes = [] 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.192767478Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 09:00:51 policy-pap | internal.leave.group.on.close = true 09:00:51 kafka | [2024-04-24 08:58:51,351] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.195129223Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=2.364555ms 09:00:51 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0690-toscapolicy.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.205022904Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 09:00:51 policy-pap | isolation.level = read_uncommitted 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.205951943Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=931.019µs 09:00:51 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.262855011Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 09:00:51 policy-pap | max.partition.fetch.bytes = 1048576 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.264178717Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.323886ms 09:00:51 policy-pap | max.poll.interval.ms = 300000 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.292983973Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 09:00:51 policy-pap | max.poll.records = 500 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.299889637Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=6.904274ms 09:00:51 policy-pap | metadata.max.age.ms = 300000 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.343723032Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 09:00:51 policy-pap | metric.reporters = [] 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.345058388Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.337886ms 09:00:51 policy-pap | metrics.num.samples = 2 09:00:51 kafka | [2024-04-24 08:58:51,352] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.352229437Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 09:00:51 policy-pap | metrics.recording.level = INFO 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.353669985Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.440318ms 09:00:51 policy-pap | metrics.sample.window.ms = 30000 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.357723633Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 09:00:51 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.358185532Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=459.139µs 09:00:51 policy-pap | receive.buffer.bytes = 65536 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.361940454Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 09:00:51 policy-pap | reconnect.backoff.max.ms = 1000 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.362641048Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=700.184µs 09:00:51 policy-pap | reconnect.backoff.ms = 50 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.369178595Z level=info msg="Executing migration" id="create alert_notification table v1" 09:00:51 policy-pap | request.timeout.ms = 30000 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.369929529Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=750.834µs 09:00:51 policy-pap | retry.backoff.ms = 100 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.374583149Z level=info msg="Executing migration" id="Add column is_default" 09:00:51 policy-pap | sasl.client.callback.handler.class = null 09:00:51 kafka | [2024-04-24 08:58:51,353] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:13.38032529Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.742111ms 09:00:51 policy-pap | sasl.jaas.config = null 09:00:51 kafka | [2024-04-24 08:58:51,354] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.384300006Z level=info msg="Executing migration" id="Add column frequency" 09:00:51 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 kafka | [2024-04-24 08:58:51,354] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.387711882Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.411056ms 09:00:51 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 kafka | [2024-04-24 08:58:51,354] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.393871371Z level=info msg="Executing migration" id="Add column send_reminder" 09:00:51 policy-pap | sasl.kerberos.service.name = null 09:00:51 kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.397273037Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.401166ms 09:00:51 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.401504619Z level=info msg="Executing migration" id="Add column disable_resolve_message" 09:00:51 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.405282291Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.779332ms 09:00:51 policy-pap | sasl.login.callback.handler.class = null 09:00:51 kafka | [2024-04-24 08:58:51,355] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0730-toscaproperty.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.4082787Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 09:00:51 policy-pap | sasl.login.class = null 09:00:51 kafka | [2024-04-24 08:58:51,355] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.409080435Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=801.355µs 09:00:51 policy-pap | sasl.login.connect.timeout.ms = null 09:00:51 kafka | [2024-04-24 08:58:51,355] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.414123932Z level=info msg="Executing migration" id="Update alert table charset" 09:00:51 policy-pap | sasl.login.read.timeout.ms = null 09:00:51 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.414146063Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=22.291µs 09:00:51 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:00:51 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.419616628Z level=info msg="Executing migration" id="Update alert_notification table charset" 09:00:51 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.419653149Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=37.251µs 09:00:51 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:00:51 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.login.refresh.window.factor = 0.8 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 09:00:51 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.424075324Z level=info msg="Executing migration" id="create notification_journal table v1" 09:00:51 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,356] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.425226757Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.151383ms 09:00:51 policy-pap | sasl.login.retry.backoff.ms = 100 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,355] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.428783655Z level=info 
msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 09:00:51 policy-pap | sasl.mechanism = GSSAPI 09:00:51 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.430302865Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.518769ms 09:00:51 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.438620915Z level=info msg="Executing migration" id="drop alert_notification_journal" 09:00:51 policy-pap | sasl.oauthbearer.expected.audience = null 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.expected.issuer = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.439288018Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=667.253µs 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.443281256Z level=info msg="Executing migration" id="create alert_notification_state table v1" 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.444473618Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.192052ms 09:00:51 kafka | 
[2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.449026526Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.450342942Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.317936ms 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.457208365Z level=info msg="Executing migration" id="Add for to alert table" 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.462913345Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.70406ms 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.466153627Z 
level=info msg="Executing migration" id="Add column uid in alert_notification" 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | security.protocol = PLAINTEXT 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.470053862Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.900195ms 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | security.providers = null 09:00:51 policy-db-migrator | > upgrade 0770-toscarequirement.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.47405959Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | send.buffer.bytes = 131072 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.474224303Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=164.573µs 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | session.timeout.ms = 45000 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.481118106Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.481980913Z level=info msg="Migration successfully executed" id="Add 
unique index alert_notification_org_id_uid" duration=863.707µs 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | socket.connection.setup.timeout.ms = 10000 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.48651428Z level=info msg="Executing migration" id="Remove unique index org_id_name" 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | ssl.cipher.suites = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.48805126Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.53705ms 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 policy-db-migrator | > upgrade 0780-toscarequirements.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.49167283Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | ssl.endpoint.identification.algorithm = https 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.496144506Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.471686ms 09:00:51 policy-pap | ssl.engine.factory.class = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.501160043Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 09:00:51 policy-pap | ssl.key.password = null 
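Note: the sasl.*, ssl.*, security.protocol = PLAINTEXT and session.timeout.ms = 45000 entries interleaved through this stretch are the standard Kafka ConsumerConfig dump that the policy-pap container prints at startup, and a few lines further down the same consumer reports subscribing to policy-pdp-pap in group policy-pap with a StringDeserializer. Purely as an orientation sketch, assuming the plain Kafka Java client rather than PAP's own topic-source wrappers, those logged values correspond to roughly the following; the class name and poll loop are illustrative only, and anything not set falls back to client defaults.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public final class PdpPapConsumerSketch {            // illustrative name, not a PAP class
    public static void main(String[] args) {
        Properties props = new Properties();
        // Settings copied from the ConsumerConfig dump in this log; omitted keys stay at client defaults.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Topic taken from the "Subscribed to topic(s): policy-pdp-pap" line further down in this log.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r ->
                System.out.printf("%s-%d@%d %s%n", r.topic(), r.partition(), r.offset(), r.value()));
        }
    }
}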
09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.501216614Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=56.702µs 09:00:51 policy-pap | ssl.keymanager.algorithm = SunX509 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.50515778Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 09:00:51 policy-pap | ssl.keystore.certificate.chain = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.505954925Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=796.695µs 09:00:51 policy-pap | ssl.keystore.key = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.510049584Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 09:00:51 policy-pap | ssl.keystore.location = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.510914641Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=864.817µs 09:00:51 policy-pap | ssl.keystore.password = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.517201332Z level=info msg="Executing migration" id="Drop old annotation table v4" 09:00:51 policy-pap | ssl.keystore.type = JKS 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.517326385Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=126.083µs 09:00:51 policy-pap | ssl.protocol = TLSv1.3 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.521314822Z level=info msg="Executing migration" id="create annotation table v5" 09:00:51 policy-pap | ssl.provider = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.522188249Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=873.387µs 09:00:51 policy-pap | ssl.secure.random.implementation = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.526358359Z level=info msg="Executing migration" id="add index annotation 0 v3" 09:00:51 policy-pap | ssl.trustmanager.algorithm = PKIX 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.527293808Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=933.959µs 09:00:51 policy-pap | ssl.truststore.certificates = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.532883906Z level=info msg="Executing migration" id="add index annotation 1 v3" 09:00:51 policy-pap | ssl.truststore.location = null 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.534206391Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.317475ms 09:00:51 policy-pap | ssl.truststore.password = null 09:00:51 kafka | [2024-04-24 08:58:51,358] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.53881206Z level=info msg="Executing migration" id="add index annotation 2 v3" 09:00:51 policy-pap | ssl.truststore.type = JKS 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 
policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.539606905Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=792.865µs 09:00:51 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.543787006Z level=info msg="Executing migration" id="add index annotation 3 v3" 09:00:51 policy-pap | 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.545155822Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.366766ms 09:00:51 policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.551545265Z level=info msg="Executing migration" id="add index annotation 4 v3" 09:00:51 policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130651 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.552982783Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.437068ms 09:00:51 policy-pap | [2024-04-24T08:58:50.651+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.556736276Z level=info msg="Executing migration" id="Update annotation table charset" 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|ServiceManager|main] Policy PAP starting topics 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.556761877Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=25.971µs 09:00:51 kafka | [2024-04-24 08:58:51,359] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=ecdaf812-364f-4159-9e55-85c348169a99, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.562772442Z level=info msg="Executing migration" id="Add column region_id to annotation table" 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, 
#topicListeners=1]]]]: starting 09:00:51 policy-db-migrator | > upgrade 0820-toscatrigger.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.56888132Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.107248ms 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.652+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=93375c45-af5c-44c3-a127-0d1a90ab70ea, alive=false, publisher=null]]: starting 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.573667463Z level=info msg="Executing migration" id="Drop category_id index" 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.666+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.574804965Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.135592ms 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:00:51 policy-pap | acks = -1 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.578814092Z level=info msg="Executing migration" id="Add column tags to annotation table" 09:00:51 kafka | [2024-04-24 08:58:51,360] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 policy-pap | auto.include.jmx.reporter = true 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.58492776Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.111818ms 09:00:51 kafka | [2024-04-24 08:58:51,361] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.588971329Z level=info msg="Executing migration" id="Create annotation_tag table v2" 09:00:51 policy-pap | batch.size = 16384 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,361] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.589760454Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=788.815µs 09:00:51 policy-pap | bootstrap.servers = [kafka:9092] 09:00:51 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 09:00:51 kafka | [2024-04-24 08:58:51,361] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.596180918Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 09:00:51 policy-pap | buffer.memory = 33554432 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.597115905Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=931.187µs 09:00:51 policy-pap | client.dns.lookup = use_all_dns_ips 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.601496381Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 09:00:51 policy-pap | client.id = producer-1 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.602776605Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.280743ms 09:00:51 policy-pap | compression.type = none 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 09:00:51 policy-pap | connections.max.idle.ms = 540000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.608842492Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-33 (state.change.logger) 09:00:51 policy-pap | delivery.timeout.ms = 120000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.624119747Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.282835ms 09:00:51 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 09:00:51 policy-pap | enable.idempotence = true 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.628457181Z level=info msg="Executing migration" id="Create annotation_tag table v3" 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 09:00:51 policy-pap | interceptor.classes = [] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.629014842Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=557.681µs 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 09:00:51 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.633576839Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 09:00:51 policy-pap | linger.ms = 0 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.63463645Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.057771ms 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 09:00:51 policy-pap | max.block.ms = 60000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.63878944Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 09:00:51 policy-pap | max.in.flight.requests.per.connection = 5 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.63925096Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=461.34µs 09:00:51 policy-db-migrator | > upgrade 
0850-FK_ToscaNodeType_requirementsName.sql 09:00:51 kafka | [2024-04-24 08:58:51,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 09:00:51 policy-pap | max.request.size = 1048576 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.644047812Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 09:00:51 policy-pap | metadata.max.age.ms = 300000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.644564952Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=516.741µs 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 09:00:51 policy-pap | metadata.max.idle.ms = 300000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.650323483Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 09:00:51 policy-pap | metric.reporters = [] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.650621849Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=299.356µs 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 09:00:51 policy-pap | metrics.num.samples = 2 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.65688418Z level=info msg="Executing migration" id="Add created time to annotation table" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 09:00:51 policy-pap | metrics.recording.level = INFO 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.663363525Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.475695ms 09:00:51 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 09:00:51 policy-pap | metrics.sample.window.ms = 30000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.668484673Z level=info msg="Executing migration" id="Add updated time to annotation table" 09:00:51 policy-db-migrator | -------------- 
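Note: the acks = -1, enable.idempotence = true, batch.size = 16384, linger.ms = 0, delivery.timeout.ms = 120000, bootstrap.servers = [kafka:9092], client.id = producer-1 and StringSerializer entries scattered through the ProducerConfig dump above are the stock Kafka producer settings policy-pap logs while wiring up its sink. A minimal sketch with those same logged values, again using the plain Kafka Java client and not PAP's own publisher code; the class name, key and payload are placeholders, and the value serializer is assumed since it does not appear in this excerpt.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class PdpPapPublisherSketch {            // illustrative name, not a PAP class
    public static void main(String[] args) {
        Properties props = new Properties();
        // Settings copied from the ProducerConfig dump in this log; acks = -1 is equivalent to "all".
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // value.serializer is not visible in this excerpt; a String serializer is assumed here.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name taken from the policy-pdp-pap partition created earlier in the log; payload is a placeholder.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "example-key", "example-payload"));
            producer.flush();
        }
    }
}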
09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 09:00:51 policy-pap | partitioner.adaptive.partitioning.enable = true 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.6724248Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.940767ms 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 09:00:51 policy-pap | partitioner.availability.timeout.ms = 0 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.675397407Z level=info msg="Executing migration" id="Add index for created in annotation table" 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 09:00:51 policy-pap | partitioner.class = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.676411157Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.011ms 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 09:00:51 policy-pap | partitioner.ignore.keys = false 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.681662698Z level=info msg="Executing migration" id="Add index for updated in annotation table" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 09:00:51 policy-pap | receive.buffer.bytes = 32768 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.682996964Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.333866ms 09:00:51 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 09:00:51 policy-pap | reconnect.backoff.max.ms = 1000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.686366159Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | reconnect.backoff.ms = 50 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.686690775Z level=info msg="Migration successfully executed" id="Convert existing 
annotations from seconds to milliseconds" duration=324.336µs 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 09:00:51 policy-pap | request.timeout.ms = 30000 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.692559819Z level=info msg="Executing migration" id="Add epoch_end column" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | retries = 2147483647 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.696707729Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.14698ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | retry.backoff.ms = 100 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.75945845Z level=info msg="Executing migration" id="Add index for epoch_end" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.client.callback.handler.class = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 09:00:51 policy-pap | sasl.jaas.config = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.761250085Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.791795ms 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.770169598Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 09:00:51 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.770438493Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=269.094µs 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | 
logger=migrator t=2024-04-24T08:58:13.779928256Z level=info msg="Executing migration" id="Move region to single row" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.kerberos.service.name = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.780459576Z level=info msg="Migration successfully executed" id="Move region to single row" duration=535.551µs 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.788245156Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 09:00:51 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 09:00:51 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.797910863Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=9.664467ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.login.callback.handler.class = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.837173971Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 09:00:51 policy-pap | sasl.login.class = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.838535167Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.363556ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.login.connect.timeout.ms = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.844386011Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.login.read.timeout.ms = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting 
the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.845786088Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.399967ms 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.8500702Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 09:00:51 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:00:51 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.851484618Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.414718ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.85781549Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 09:00:51 policy-pap | sasl.login.refresh.window.factor = 0.8 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.859681885Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.864805ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.862889548Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.864245924Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.356956ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.login.retry.backoff.ms = 
100 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.877376078Z level=info msg="Executing migration" id="Increase tags column to length 4096" 09:00:51 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:00:51 policy-pap | sasl.mechanism = GSSAPI 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.87750599Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=131.442µs 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.884562546Z level=info msg="Executing migration" id="create test_data table" 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 09:00:51 policy-pap | sasl.oauthbearer.expected.audience = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.885862641Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.302775ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.oauthbearer.expected.issuer = null 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.891659833Z level=info msg="Executing migration" id="create dashboard_version table v1" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 kafka | [2024-04-24 08:58:51,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.892348756Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=688.943µs 09:00:51 policy-db-migrator | 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 kafka | [2024-04-24 08:58:51,398] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, 
__consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.895864474Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 09:00:51 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 kafka | [2024-04-24 08:58:51,398] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.896647479Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=782.715µs 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:00:51 kafka | [2024-04-24 08:58:51,437] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.901570004Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 09:00:51 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:00:51 kafka | [2024-04-24 08:58:51,450] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.902485453Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=914.839µs 09:00:51 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:00:51 kafka | [2024-04-24 08:58:51,452] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.906369147Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 09:00:51 policy-pap | security.protocol = PLAINTEXT 09:00:51 kafka | [2024-04-24 08:58:51,453] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 
(kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.906640552Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=271.445µs 09:00:51 policy-pap | security.providers = null 09:00:51 kafka | [2024-04-24 08:58:51,455] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.909480058Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 09:00:51 policy-pap | send.buffer.bytes = 131072 09:00:51 kafka | [2024-04-24 08:58:51,925] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.910034608Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=554.37µs 09:00:51 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:00:51 kafka | [2024-04-24 08:58:51,925] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.912689489Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 09:00:51 policy-pap | socket.connection.setup.timeout.ms = 10000 09:00:51 kafka | [2024-04-24 08:58:51,926] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.912842382Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=151.233µs 09:00:51 policy-pap | ssl.cipher.suites = null 09:00:51 kafka | [2024-04-24 08:58:51,926] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.921823875Z level=info msg="Executing migration" id="create team table" 09:00:51 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 kafka | [2024-04-24 08:58:51,926] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.922935987Z level=info msg="Migration successfully executed" id="create team table" duration=1.110582ms 09:00:51 policy-pap | ssl.endpoint.identification.algorithm = https 09:00:51 kafka | [2024-04-24 08:58:51,933] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.928503345Z level=info msg="Executing migration" id="add index team.org_id" 09:00:51 policy-pap | ssl.engine.factory.class = null 09:00:51 kafka | [2024-04-24 08:58:51,933] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.930446243Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.942057ms 09:00:51 policy-pap | ssl.key.password = null 09:00:51 kafka | [2024-04-24 08:58:51,933] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.934968589Z level=info msg="Executing migration" id="add unique index team_org_id_name" 09:00:51 policy-pap | ssl.keymanager.algorithm = SunX509 09:00:51 kafka | [2024-04-24 08:58:51,933] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.935979779Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.01095ms 09:00:51 policy-pap | ssl.keystore.certificate.chain = null 09:00:51 kafka | [2024-04-24 08:58:51,934] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.940416815Z level=info msg="Executing migration" id="Add column uid in team" 09:00:51 policy-pap | ssl.keystore.key = null 09:00:51 kafka | [2024-04-24 08:58:51,939] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.946224007Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.807622ms 09:00:51 policy-pap | ssl.keystore.location = null 09:00:51 kafka | [2024-04-24 08:58:51,940] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.949094052Z level=info msg="Executing migration" id="Update uid column values in team" 09:00:51 policy-pap | ssl.keystore.password = null 09:00:51 kafka | [2024-04-24 08:58:51,940] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.949239225Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=144.823µs 09:00:51 policy-pap | ssl.keystore.type = JKS 09:00:51 kafka | [2024-04-24 08:58:51,940] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.952007588Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 09:00:51 policy-pap | ssl.protocol = TLSv1.3 09:00:51 kafka | [2024-04-24 08:58:51,940] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.952645481Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=637.523µs 09:00:51 policy-pap | ssl.provider = null 09:00:51 kafka | [2024-04-24 08:58:51,953] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.959557684Z level=info msg="Executing migration" id="create team member table" 09:00:51 policy-pap | ssl.secure.random.implementation = null 09:00:51 kafka | [2024-04-24 08:58:51,954] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.960479782Z level=info msg="Migration successfully executed" id="create team member table" duration=919.608µs 09:00:51 policy-pap | ssl.trustmanager.algorithm = PKIX 09:00:51 kafka | [2024-04-24 08:58:51,954] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.963319596Z level=info msg="Executing migration" id="add index team_member.org_id" 09:00:51 policy-pap | ssl.truststore.certificates = null 09:00:51 kafka | [2024-04-24 08:58:51,954] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.964343197Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.021181ms 09:00:51 policy-pap | ssl.truststore.location = null 09:00:51 kafka | [2024-04-24 08:58:51,954] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.967168191Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 09:00:51 policy-pap | ssl.truststore.password = null 09:00:51 kafka | [2024-04-24 08:58:51,963] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.968226341Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.05751ms 09:00:51 policy-pap | ssl.truststore.type = JKS 09:00:51 kafka | [2024-04-24 08:58:51,964] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.971082037Z level=info msg="Executing migration" id="add index team_member.team_id" 09:00:51 policy-pap | transaction.timeout.ms = 60000 09:00:51 kafka | [2024-04-24 08:58:51,964] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.972063766Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=982.009µs 09:00:51 policy-pap | transactional.id = null 09:00:51 kafka | [2024-04-24 08:58:51,964] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.976362819Z level=info msg="Executing migration" id="Add column email to team table" 09:00:51 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:00:51 kafka | [2024-04-24 08:58:51,965] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.980354036Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.989957ms 09:00:51 policy-pap | 09:00:51 kafka | [2024-04-24 08:58:51,978] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.983335233Z level=info msg="Executing migration" id="Add column external to team_member table" 09:00:51 policy-pap | [2024-04-24T08:58:50.675+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 09:00:51 kafka | [2024-04-24 08:58:51,979] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.987237209Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.900756ms 09:00:51 policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 kafka | [2024-04-24 08:58:51,979] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.990010533Z level=info msg="Executing migration" id="Add column permission to team_member table" 09:00:51 policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 kafka | [2024-04-24 08:58:51,979] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.995145502Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.134428ms 09:00:51 policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130688 09:00:51 kafka | [2024-04-24 08:58:51,979] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:13.999836942Z level=info msg="Executing migration" id="create dashboard acl table" 09:00:51 policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=93375c45-af5c-44c3-a127-0d1a90ab70ea, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:00:51 kafka | [2024-04-24 08:58:51,989] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.000956873Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.118931ms 09:00:51 policy-pap | [2024-04-24T08:58:50.688+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0583e7e0-8980-4e61-8167-9e42f04d3bdd, alive=false, publisher=null]]: starting 09:00:51 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.003928321Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 09:00:51 kafka | [2024-04-24 08:58:51,990] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | [2024-04-24T08:58:50.689+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.005055683Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.126592ms 09:00:51 kafka | [2024-04-24 08:58:51,990] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 09:00:51 policy-pap | acks = -1 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.008189775Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 09:00:51 kafka | [2024-04-24 08:58:51,990] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | auto.include.jmx.reporter = true 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.00921601Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.026615ms 09:00:51 kafka | [2024-04-24 08:58:51,991] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | batch.size = 16384 09:00:51 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.015367955Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 09:00:51 kafka | [2024-04-24 08:58:51,997] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | bootstrap.servers = [kafka:9092] 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.01633435Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=965.635µs 09:00:51 kafka | [2024-04-24 08:58:51,997] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | buffer.memory = 33554432 09:00:51 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 kafka | [2024-04-24 08:58:51,997] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 09:00:51 policy-pap | client.dns.lookup = use_all_dns_ips 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.019327749Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 09:00:51 kafka | [2024-04-24 08:58:51,997] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | client.id = producer-2 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.020435128Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.106449ms 09:00:51 kafka | [2024-04-24 08:58:51,998] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | compression.type = none 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.024883261Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 09:00:51 kafka | [2024-04-24 08:58:52,005] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | connections.max.idle.ms = 540000 09:00:51 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.02602794Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.139589ms 09:00:51 kafka | [2024-04-24 08:58:52,006] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | delivery.timeout.ms = 120000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.029263954Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 09:00:51 kafka | [2024-04-24 08:58:52,006] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 09:00:51 policy-pap | enable.idempotence = true 09:00:51 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.030418742Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.154668ms 09:00:51 kafka | [2024-04-24 08:58:52,006] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | interceptor.classes = [] 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.03335012Z level=info msg="Executing migration" id="add index dashboard_permission" 09:00:51 kafka | [2024-04-24 08:58:52,006] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.034364647Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.014927ms 09:00:51 kafka | [2024-04-24 08:58:52,014] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | linger.ms = 0 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.037239874Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 09:00:51 kafka | [2024-04-24 08:58:52,015] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | max.block.ms = 60000 09:00:51 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.037635301Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=397.207µs 09:00:51 kafka | [2024-04-24 08:58:52,015] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 09:00:51 policy-pap | max.in.flight.requests.per.connection = 5 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,015] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | max.request.size = 1048576 09:00:51 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 kafka | [2024-04-24 08:58:52,015] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | metadata.max.age.ms = 300000 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,021] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | metadata.max.idle.ms = 300000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.043477487Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,021] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | metric.reporters = [] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.04367149Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=194.253µs 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,021] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 09:00:51 policy-pap | metrics.num.samples = 2 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.045741394Z level=info msg="Executing migration" id="create tag table" 09:00:51 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:00:51 kafka | [2024-04-24 08:58:52,022] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | metrics.recording.level = INFO 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.046377525Z level=info msg="Migration successfully executed" id="create tag table" duration=635.591µs 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,022] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | metrics.sample.window.ms = 30000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.050284389Z level=info msg="Executing migration" id="add index tag.key_value" 09:00:51 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 kafka | [2024-04-24 08:58:52,028] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | partitioner.adaptive.partitioning.enable = true 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.051717103Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.432604ms 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,028] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | partitioner.availability.timeout.ms = 0 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.055297142Z level=info msg="Executing migration" id="create login attempt table" 09:00:51 kafka | [2024-04-24 08:58:52,028] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 09:00:51 policy-pap | partitioner.class = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.055955062Z level=info msg="Migration successfully executed" id="create login attempt table" duration=657.41µs 09:00:51 policy-db-migrator | 09:00:51 policy-pap | partitioner.ignore.keys = false 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.059293058Z level=info msg="Executing migration" id="add index login_attempt.username" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,029] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | receive.buffer.bytes = 32768 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.060319984Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.023506ms 09:00:51 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:00:51 kafka | [2024-04-24 08:58:52,029] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | reconnect.backoff.max.ms = 1000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.064487293Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,035] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | reconnect.backoff.ms = 50 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.065613292Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.126449ms 09:00:51 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 kafka | [2024-04-24 08:58:52,036] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.068864644Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | request.timeout.ms = 30000 09:00:51 kafka | [2024-04-24 08:58:52,036] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.088164762Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.300008ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | retries = 2147483647 09:00:51 kafka | [2024-04-24 08:58:52,036] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.114570687Z level=info msg="Executing migration" id="create login_attempt v2" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | retry.backoff.ms = 100 09:00:51 kafka | [2024-04-24 08:58:52,036] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.116106052Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.534026ms 09:00:51 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.121677444Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 09:00:51 policy-pap | sasl.client.callback.handler.class = null 09:00:51 kafka | [2024-04-24 08:58:52,045] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.123356901Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.674807ms 09:00:51 policy-pap | sasl.jaas.config = null 09:00:51 kafka | [2024-04-24 08:58:52,046] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:00:51 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.127308556Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 09:00:51 kafka | [2024-04-24 08:58:52,047] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.127896696Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=588.52µs 09:00:51 kafka | [2024-04-24 08:58:52,047] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.kerberos.service.name = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.131439674Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 09:00:51 kafka | [2024-04-24 08:58:52,047] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.132106475Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=666.401µs 09:00:51 kafka | [2024-04-24 08:58:52,056] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:00:51 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.145041168Z level=info msg="Executing migration" id="create user auth table" 09:00:51 kafka | [2024-04-24 08:58:52,057] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | sasl.login.callback.handler.class = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.146469481Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.424853ms 09:00:51 kafka | [2024-04-24 08:58:52,057] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.login.class = null 09:00:51 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.182140998Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 09:00:51 kafka | [2024-04-24 08:58:52,057] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.login.connect.timeout.ms = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.183903747Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.735048ms 09:00:51 kafka | [2024-04-24 08:58:52,057] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | sasl.login.read.timeout.ms = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.207318352Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 09:00:51 kafka | [2024-04-24 08:58:52,063] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.207568836Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=250.924µs 09:00:51 kafka | [2024-04-24 08:58:52,064] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:00:51 policy-db-migrator | > upgrade 0100-pdp.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.232672009Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 09:00:51 kafka | [2024-04-24 08:58:52,064] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.login.refresh.window.factor = 0.8 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.241964601Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=9.289362ms 09:00:51 kafka | [2024-04-24 08:58:52,064] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:00:51 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.254141842Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 09:00:51 kafka | [2024-04-24 08:58:52,064] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.260916534Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=6.780272ms 09:00:51 kafka | [2024-04-24 08:58:52,071] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | sasl.login.retry.backoff.ms = 100 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.269414444Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 09:00:51 kafka | [2024-04-24 08:58:52,072] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | sasl.mechanism = GSSAPI 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.272978622Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.572838ms 09:00:51 kafka | [2024-04-24 08:58:52,072] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:00:51 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.285122042Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 09:00:51 kafka | [2024-04-24 08:58:52,072] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.oauthbearer.expected.audience = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.290560291Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.441509ms 09:00:51 kafka | [2024-04-24 08:58:52,073] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.expected.issuer = null 09:00:51 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.296662782Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 09:00:51 kafka | [2024-04-24 08:58:52,080] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.298216017Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.552835ms 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.302443697Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 09:00:51 kafka | [2024-04-24 08:58:52,081] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.307796175Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.352008ms 09:00:51 kafka | [2024-04-24 08:58:52,081] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:00:51 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.312275528Z level=info msg="Executing migration" id="create server_lock table" 09:00:51 kafka | [2024-04-24 08:58:52,081] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.313144363Z level=info msg="Migration successfully executed" id="create server_lock table" duration=869.264µs 09:00:51 kafka | [2024-04-24 08:58:52,081] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:00:51 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.321595781Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 09:00:51 kafka | [2024-04-24 08:58:52,093] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.323033615Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.444114ms 09:00:51 kafka | [2024-04-24 08:58:52,094] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | security.protocol = PLAINTEXT 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.330961886Z level=info msg="Executing migration" id="create user auth token table" 09:00:51 kafka | [2024-04-24 08:58:52,094] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 09:00:51 policy-pap | security.providers = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.332015013Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.055837ms 09:00:51 kafka | [2024-04-24 08:58:52,094] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | send.buffer.bytes = 131072 09:00:51 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.337871439Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 09:00:51 kafka | [2024-04-24 08:58:52,094] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.338768634Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=897.575µs 09:00:51 kafka | [2024-04-24 08:58:52,102] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | socket.connection.setup.timeout.ms = 10000 09:00:51 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.342863092Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 09:00:51 kafka | [2024-04-24 08:58:52,103] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | ssl.cipher.suites = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.343772077Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=909.335µs 09:00:51 kafka | [2024-04-24 08:58:52,103] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 09:00:51 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.34701723Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 09:00:51 kafka | [2024-04-24 08:58:52,103] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | ssl.endpoint.identification.algorithm = https 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.348005896Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=988.686µs 09:00:51 kafka | [2024-04-24 08:58:52,103] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | ssl.engine.factory.class = null 09:00:51 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 09:00:51 kafka | [2024-04-24 08:58:52,113] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.352778285Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 09:00:51 policy-pap | ssl.key.password = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.358531309Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.752774ms 09:00:51 kafka | [2024-04-24 08:58:52,114] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | ssl.keymanager.algorithm = SunX509 09:00:51 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.36283987Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 09:00:51 kafka | [2024-04-24 08:58:52,114] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 09:00:51 policy-pap | ssl.keystore.certificate.chain = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.363776835Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=934.145µs 09:00:51 kafka | [2024-04-24 08:58:52,114] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | ssl.keystore.key = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.368746868Z level=info msg="Executing migration" id="create cache_data table" 09:00:51 kafka | [2024-04-24 08:58:52,114] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | ssl.keystore.location = null 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.369568581Z level=info msg="Migration successfully executed" id="create cache_data table" duration=821.833µs 09:00:51 kafka | [2024-04-24 08:58:52,121] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | ssl.keystore.password = null 09:00:51 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.374495252Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 09:00:51 kafka | [2024-04-24 08:58:52,121] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | ssl.keystore.type = JKS 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.375402047Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=906.615µs 09:00:51 kafka | [2024-04-24 08:58:52,121] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 09:00:51 policy-pap | ssl.protocol = TLSv1.3 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.379968772Z level=info msg="Executing migration" id="create short_url table v1" 09:00:51 kafka | [2024-04-24 08:58:52,121] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | ssl.provider = null 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.381029779Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.062107ms 09:00:51 policy-pap | ssl.secure.random.implementation = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.384271763Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 09:00:51 kafka | [2024-04-24 08:58:52,121] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
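Note on the policy-db-migrator steps above: scripts 0120 through 0140 rework the pdpstatistics table, dropping the old primary key, adding the POLICYUNDEPLOY* counters plus a numeric ID column, backfilling the ID for existing rows with ROW_NUMBER(), and then installing the composite key (ID, name, version). A rough JDBC sketch of that same sequence, reusing the statements printed above; the connection URL and credentials are placeholders only, since the job applies these scripts through policy-db-migrator rather than hand-written code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PdpStatisticsPkMigration {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials, for illustration only.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass");
                 Statement s = c.createStatement()) {
                // 0120: drop the old primary key.
                s.executeUpdate("ALTER TABLE pdpstatistics DROP PRIMARY KEY");
                // 0130: add the undeploy counters and the numeric ID column.
                s.executeUpdate("ALTER TABLE pdpstatistics "
                    + "ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, "
                    + "ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, "
                    + "ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, "
                    + "ADD COLUMN ID BIGINT NOT NULL");
                // 0140: backfill IDs for existing rows, then install the composite key.
                s.executeUpdate("UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, "
                    + "ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics "
                    + "GROUP BY name, version, timeStamp) AS t "
                    + "ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) "
                    + "SET p.id=t.row_num");
                s.executeUpdate("ALTER TABLE pdpstatistics "
                    + "ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)");
            }
        }
    }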
(state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 09:00:51 policy-pap | ssl.trustmanager.algorithm = PKIX 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.385428351Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.156768ms 09:00:51 kafka | [2024-04-24 08:58:52,128] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | ssl.truststore.certificates = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.3913824Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 09:00:51 kafka | [2024-04-24 08:58:52,128] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 09:00:51 policy-pap | ssl.truststore.location = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.391584083Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=223.144µs 09:00:51 kafka | [2024-04-24 08:58:52,128] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | ssl.truststore.password = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.394811006Z level=info msg="Executing migration" id="delete alert_definition table" 09:00:51 kafka | [2024-04-24 08:58:52,128] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 policy-pap | ssl.truststore.type = JKS 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.394904278Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=93.722µs 09:00:51 kafka | [2024-04-24 08:58:52,128] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 policy-pap | transaction.timeout.ms = 60000 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.401014208Z level=info msg="Executing migration" id="recreate alert_definition table" 09:00:51 kafka | [2024-04-24 08:58:52,134] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 09:00:51 policy-pap | transactional.id = null 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.402034005Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.021627ms 09:00:51 kafka | [2024-04-24 08:58:52,134] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.405159177Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 09:00:51 kafka | [2024-04-24 08:58:52,134] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 09:00:51 policy-pap | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.405908039Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=745.762µs 09:00:51 kafka | [2024-04-24 08:58:52,134] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:50.690+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.411529541Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 09:00:51 kafka | [2024-04-24 08:58:52,134] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
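Note on the policy-pap property dump that ends above: it is the configuration of PAP's Kafka publisher, with PLAINTEXT transport, null key and trust stores, a String value serializer and no transactional.id; Kafka 3.x clients enable idempotence by default, hence the "Instantiated an idempotent producer" message. A minimal sketch of a producer built with the same effective settings, assuming the kafka:9092 broker address used in this environment and an illustrative payload:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PapStylePublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // String serializers, as in the dump above; security.protocol stays PLAINTEXT.
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put("security.protocol", "PLAINTEXT");
            // Idempotence and transactional.id are left at their defaults, matching the log.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "example payload"));
                producer.flush();
            }
        }
    }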
(state.change.logger) 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.41328035Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.752879ms 09:00:51 kafka | [2024-04-24 08:58:52,141] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.416628825Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 09:00:51 kafka | [2024-04-24 08:58:52,141] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 09:00:51 policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713949130692 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.416695326Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=67.431µs 09:00:51 kafka | [2024-04-24 08:58:52,141] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0583e7e0-8980-4e61-8167-9e42f04d3bdd, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.421634777Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 09:00:51 kafka | [2024-04-24 08:58:52,141] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 09:00:51 policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.422576832Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=942.195µs 09:00:51 kafka | [2024-04-24 08:58:52,142] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | JOIN pdpstatistics b 09:00:51 policy-pap | [2024-04-24T08:58:50.692+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.426313734Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 09:00:51 kafka | [2024-04-24 08:58:52,149] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 09:00:51 policy-pap | [2024-04-24T08:58:50.696+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.427215199Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=904.315µs 09:00:51 kafka | [2024-04-24 08:58:52,149] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | SET a.id = b.id 09:00:51 policy-pap | [2024-04-24T08:58:50.696+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.431401658Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 09:00:51 kafka | [2024-04-24 08:58:52,150] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 09:00:51 policy-pap | [2024-04-24T08:58:50.702+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.432321443Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=922.145µs 09:00:51 policy-pap | [2024-04-24T08:58:50.706+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.436831017Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,150] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | [2024-04-24T08:58:50.706+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.437769452Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=938.405µs 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,150] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:50.706+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.442241687Z level=info msg="Executing migration" id="Add column paused in alert_definition" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,157] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | [2024-04-24T08:58:50.707+00:00|INFO|ServiceManager|main] Policy PAP started 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.449029398Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.792821ms 09:00:51 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 09:00:51 kafka | [2024-04-24 08:58:52,157] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | [2024-04-24T08:58:50.708+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.452528375Z level=info msg="Executing migration" id="drop alert_definition table" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:50.708+00:00|INFO|TimerManager|Thread-9] timer manager update started 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.453456711Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=928.376µs 09:00:51 kafka | [2024-04-24 08:58:52,157] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 09:00:51 policy-pap | [2024-04-24T08:58:50.708+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.893 seconds (process running for 10.499) 09:00:51 kafka | [2024-04-24 08:58:52,157] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.45953079Z level=info msg="Executing migration" id="delete alert_definition_version table" 09:00:51 policy-pap | [2024-04-24T08:58:51.119+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.459612541Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=82.391µs 09:00:51 kafka | [2024-04-24 08:58:52,157] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:51.119+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.462404727Z level=info msg="Executing migration" id="recreate alert_definition_version table" 09:00:51 kafka | [2024-04-24 08:58:52,164] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-pap | [2024-04-24T08:58:51.119+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.463826051Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.418954ms 09:00:51 kafka | [2024-04-24 08:58:52,164] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-pap | [2024-04-24T08:58:51.123+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg 09:00:51 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.510959047Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 09:00:51 kafka | [2024-04-24 08:58:52,164] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 09:00:51 policy-pap | [2024-04-24T08:58:51.161+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,164] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-pap | [2024-04-24T08:58:51.161+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Cluster ID: FWpz7Mn1RFGDoEChXT3QPg 09:00:51 policy-pap | [2024-04-24T08:58:51.230+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,164] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
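Note on the metadata warnings above: the UNKNOWN_TOPIC_OR_PARTITION entry and the LEADER_NOT_AVAILABLE retries that follow are transient; the policy-pdp-pap topic is being auto-created and the clients simply retry until the broker has elected a leader for it. If the noise were a concern, the topic could be created up front with the Admin client; a sketch, using the single-partition, replication-factor-1 layout implied by the ISR [1] entries of this single-broker setup:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class PreCreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // One partition, replication factor 1: matches a one-broker test environment.
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }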
(state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:58:51.233+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 09:00:51 kafka | [2024-04-24 08:58:52,176] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.513341536Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.385908ms 09:00:51 policy-pap | [2024-04-24T08:58:51.233+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,176] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.519747471Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 09:00:51 policy-pap | [2024-04-24T08:58:51.303+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,177] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.520826539Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.079078ms 09:00:51 policy-pap | [2024-04-24T08:58:51.340+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,177] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.526200907Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 09:00:51 policy-pap | [2024-04-24T08:58:51.409+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 09:00:51 kafka | [2024-04-24 08:58:52,177] INFO [Broker id=1] Leader 
__consumer_offsets-44 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.52638293Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=144.743µs 09:00:51 policy-pap | [2024-04-24T08:58:51.446+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,185] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.534432872Z level=info msg="Executing migration" id="drop alert_definition_version table" 09:00:51 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 09:00:51 policy-pap | [2024-04-24T08:58:51.515+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,185] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.536350354Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.918202ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:51.553+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,185] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.541901475Z level=info msg="Executing migration" id="create alert_instance table" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:51.622+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,185] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.543510182Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.610267ms 09:00:51 policy-pap | [2024-04-24T08:58:51.659+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata 
with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,185] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | > upgrade 0210-sequence.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.547973036Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 09:00:51 policy-pap | [2024-04-24T08:58:51.727+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,197] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.549660643Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.690997ms 09:00:51 policy-pap | [2024-04-24T08:58:51.765+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,198] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.555439758Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 09:00:51 policy-pap | [2024-04-24T08:58:51.832+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,198] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.556543786Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.103958ms 09:00:51 policy-pap | [2024-04-24T08:58:51.869+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,198] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 
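Note on the 0210-sequence.sql step above: the sequence table (SEQ_NAME, SEQ_COUNT) is the usual table-based ID generator layout used by JPA providers, and a later 0220-sequence.sql step further down in this log seeds its SEQ_GEN row from the highest existing pdpstatistics id. A rough JDBC sketch of how a generator typically claims the next value from such a table; the connection details are placeholders only:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class SequenceTableAllocator {
        // Claims the next value from the SEQ_GEN row, the way a table generator
        // typically does: read the current count under lock, bump it, return it.
        static long nextValue(Connection c) throws Exception {
            c.setAutoCommit(false);
            long next;
            try (PreparedStatement sel = c.prepareStatement(
                     "SELECT SEQ_COUNT FROM sequence WHERE SEQ_NAME = 'SEQ_GEN' FOR UPDATE");
                 ResultSet rs = sel.executeQuery()) {
                if (!rs.next()) throw new IllegalStateException("SEQ_GEN row missing");
                next = rs.getLong(1) + 1;
            }
            try (PreparedStatement upd = c.prepareStatement(
                     "UPDATE sequence SET SEQ_COUNT = ? WHERE SEQ_NAME = 'SEQ_GEN'")) {
                upd.setLong(1, next);
                upd.executeUpdate();
            }
            c.commit();
            return next;
        }

        public static void main(String[] args) throws Exception {
            // Illustrative connection details only.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass")) {
                System.out.println("next id = " + nextValue(c));
            }
        }
    }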
09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.560907768Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:51.939+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,198] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.567284883Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.375435ms 09:00:51 policy-pap | [2024-04-24T08:58:51.975+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,231] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.619909408Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:52.044+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,232] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | > upgrade 0220-sequence.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.621127338Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.22232ms 09:00:51 policy-pap | [2024-04-24T08:58:52.079+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,232] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.625197496Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:52.152+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,232] INFO [Partition 
__consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.626201422Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.004196ms 09:00:51 policy-pap | [2024-04-24T08:58:52.188+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,233] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.6388504Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:52.261+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,239] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.670324917Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=31.475937ms 09:00:51 policy-pap | [2024-04-24T08:58:52.292+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,240] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.674755601Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:52.372+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,240] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.700814889Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.060698ms 09:00:51 policy-pap | 
[2024-04-24T08:58:52.396+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,240] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.704358278Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:52.480+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:00:51 kafka | [2024-04-24 08:58:52,240] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.705359954Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.001096ms 09:00:51 policy-pap | [2024-04-24T08:58:52.534+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:00:51 kafka | [2024-04-24 08:58:52,248] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.709106566Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:52.540+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 09:00:51 kafka | [2024-04-24 08:58:52,248] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.710146373Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.039667ms 09:00:51 policy-pap | [2024-04-24T08:58:52.565+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c 09:00:51 kafka | [2024-04-24 08:58:52,249] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | 
logger=migrator t=2024-04-24T08:58:14.718565481Z level=info msg="Executing migration" id="add current_reason column related to current_state" 09:00:51 policy-pap | [2024-04-24T08:58:52.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 09:00:51 kafka | [2024-04-24 08:58:52,249] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.726825867Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.252636ms 09:00:51 policy-pap | [2024-04-24T08:58:52.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 09:00:51 kafka | [2024-04-24 08:58:52,249] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(UfYjnzzkRPeYang4gRgPIg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.731009456Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 09:00:51 policy-pap | [2024-04-24T08:58:52.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:00:51 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 09:00:51 kafka | [2024-04-24 08:58:52,255] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.740350679Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=9.340823ms 09:00:51 policy-pap | [2024-04-24T08:58:52.586+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] (Re-)joining group 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,256] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.743826736Z level=info msg="Executing migration" id="create alert_rule table" 09:00:51 policy-pap | [2024-04-24T08:58:52.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Request 
joining group due to: need to re-join with the given member-id: consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,256] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.74524688Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.416814ms 09:00:51 policy-pap | [2024-04-24T08:58:52.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,256] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.74953087Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 09:00:51 policy-pap | [2024-04-24T08:58:52.598+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] (Re-)joining group 09:00:51 policy-db-migrator | > upgrade 0120-toscatrigger.sql 09:00:51 kafka | [2024-04-24 08:58:52,256] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
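Note on the consumer lines above: both the policy-pap heartbeat consumer and the pdp-pap consumer go through the normal two-step group join, where the first JoinGroup attempt is rejected with MemberIdRequiredException so the client can retry with the member id it was just assigned; the "Successfully joined group with generation Generation{generationId=1 ...}" entries just below show the rebalance completing and policy-pdp-pap-0 being assigned. A minimal consumer sketch that subscribes the same way and logs the assignment when it arrives; the group id is illustrative and the broker address is the kafka:9092 coordinator seen above:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapGroupMember {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // illustrative group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Corresponds to "Adding newly assigned partitions" in the log.
                        System.out.println("assigned: " + parts);
                    }
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("revoked: " + parts);
                    }
                });
                // The first poll drives the find-coordinator / join-group / sync-group steps
                // seen in the log; a MemberIdRequiredException on the first attempt is normal.
                consumer.poll(Duration.ofSeconds(5));
            }
        }
    }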
(state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.750592838Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.061508ms 09:00:51 policy-pap | [2024-04-24T08:58:55.590+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c', protocol='range'} 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,277] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.753801541Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 09:00:51 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 09:00:51 policy-pap | [2024-04-24T08:58:55.599+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c=Assignment(partitions=[policy-pdp-pap-0])} 09:00:51 kafka | [2024-04-24 08:58:52,278] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.754942599Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.140688ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:55.605+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d', protocol='range'} 09:00:51 kafka | [2024-04-24 08:58:52,278] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.758561319Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:55.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Finished assignment for group at generation 1: {consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d=Assignment(partitions=[policy-pdp-pap-0])} 09:00:51 kafka | [2024-04-24 08:58:52,278] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.75985723Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.295441ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | 
[2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c', protocol='range'} 09:00:51 kafka | [2024-04-24 08:58:52,278] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.764292334Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 09:00:51 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 09:00:51 policy-pap | [2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d', protocol='range'} 09:00:51 kafka | [2024-04-24 08:58:52,286] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.764386515Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=94.611µs 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:00:51 kafka | [2024-04-24 08:58:52,287] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.767658358Z level=info msg="Executing migration" id="add column for to alert_rule" 09:00:51 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 09:00:51 policy-pap | [2024-04-24T08:58:55.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:00:51 kafka | [2024-04-24 08:58:52,287] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.77382165Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.161852ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:55.625+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Adding newly assigned partitions: policy-pdp-pap-0 09:00:51 kafka | [2024-04-24 08:58:52,287] INFO [Partition __consumer_offsets-30 
broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.777883907Z level=info msg="Executing migration" id="add column annotations to alert_rule" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:55.626+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 09:00:51 kafka | [2024-04-24 08:58:52,287] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.78297086Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.086923ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:55.661+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Found no committed offset for partition policy-pdp-pap-0 09:00:51 kafka | [2024-04-24 08:58:52,295] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.787417834Z level=info msg="Executing migration" id="add column labels to alert_rule" 09:00:51 policy-db-migrator | > upgrade 0140-toscaparameter.sql 09:00:51 policy-pap | [2024-04-24T08:58:55.662+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 09:00:51 kafka | [2024-04-24 08:58:52,295] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.791600003Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.181539ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:55.674+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3, groupId=c2598a93-7b5f-4e4e-b23a-b864ffd9a18a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
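Aside: the policy-pap entries above trace a standard Kafka consumer-group startup. The consumer (Re-)joins the group, is told it needs a member id first (MemberIdRequiredException), rejoins with the broker-assigned member id, is handed partition policy-pdp-pap-0 in the group assignment, finds no committed offset, and resets to the fetch position reported by the broker. The following is a minimal, self-contained sketch of that handshake, not the policy-pap source: the broker address kafka:9092, group id policy-pap, and topic policy-pdp-pap are taken from the log lines above, while the auto.offset.reset value is an assumption.

// Minimal sketch (assumed config, not the ONAP policy-pap implementation) of the
// consumer-group handshake logged above: join group, receive an assignment for
// policy-pdp-pap-0, fall back to the reset policy when no committed offset exists.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // broker seen in the log
        props.put("group.id", "policy-pap");            // group id seen in the log
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "latest");       // assumption; consistent with the offset reset above

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));   // topic seen in the log
            while (true) {
                // poll() drives the join/sync/assignment protocol that the
                // ConsumerCoordinator INFO lines in the log describe
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}

Each poll() call advances the coordinator state machine that emits the "Request joining group", "Successfully joined group", "Finished assignment", and "Resetting offset" lines seen in this log.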
09:00:51 kafka | [2024-04-24 08:58:52,295] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.79508835Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 09:00:51 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 09:00:51 policy-pap | [2024-04-24T08:58:55.675+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 09:00:51 kafka | [2024-04-24 08:58:52,295] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.795771842Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=681.572µs 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:58:59.736+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 09:00:51 kafka | [2024-04-24 08:58:52,296] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.798886432Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:59.737+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 09:00:51 kafka | [2024-04-24 08:58:52,306] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.799753776Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=866.804µs 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:58:59.738+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 09:00:51 kafka | [2024-04-24 08:58:52,307] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.804587686Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 09:00:51 policy-db-migrator | > upgrade 0150-toscaproperty.sql 09:00:51 policy-pap | [2024-04-24T08:59:12.418+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 09:00:51 kafka | [2024-04-24 08:58:52,307] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.808818136Z level=info 
msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.23165ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [] 09:00:51 kafka | [2024-04-24 08:58:52,307] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.812038589Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 09:00:51 policy-pap | [2024-04-24T08:59:12.419+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 kafka | [2024-04-24 08:58:52,307] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.816521402Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.482153ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.82063321Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 09:00:51 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 09:00:51 kafka | [2024-04-24 08:58:52,317] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.821633526Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.000036ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.826044209Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 09:00:51 policy-pap | [2024-04-24T08:59:12.419+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,318] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.833635864Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.590875ms 09:00:51 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4e120101-1cee-4165-9b1d-d46c107a0c1e","timestampMs":1713949152385,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,318] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.837139432Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 09:00:51 
policy-pap | [2024-04-24T08:59:12.426+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,318] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.843029558Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.893076ms 09:00:51 policy-pap | [2024-04-24T08:59:12.497+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting 09:00:51 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 09:00:51 kafka | [2024-04-24 08:58:52,318] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.945620476Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 09:00:51 policy-pap | [2024-04-24T08:59:12.497+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting listener 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,327] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.945791039Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=175.183µs 09:00:51 policy-pap | [2024-04-24T08:59:12.497+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting timer 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,328] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.951017354Z level=info msg="Executing migration" id="create alert_rule_version table" 09:00:51 policy-pap | [2024-04-24T08:59:12.498+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, expireMs=1713949182498] 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,329] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.952910036Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.892832ms 09:00:51 policy-pap | [2024-04-24T08:59:12.499+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting enqueue 09:00:51 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 09:00:51 kafka | [2024-04-24 08:58:52,329] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 
(kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.958715181Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 09:00:51 policy-pap | [2024-04-24T08:59:12.499+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, expireMs=1713949182498] 09:00:51 kafka | [2024-04-24 08:58:52,329] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.959780839Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.065468ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.501+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:52,342] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.963222516Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,344] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.964308243Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.085317ms 09:00:51 policy-pap | [2024-04-24T08:59:12.501+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate started 09:00:51 kafka | [2024-04-24 08:58:52,344] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.968731977Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 09:00:51 policy-pap | [2024-04-24T08:59:12.534+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.968814728Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=83.271µs 09:00:51 policy-pap | 
{"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,345] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.972045Z level=info msg="Executing migration" id="add column for to alert_rule_version" 09:00:51 kafka | [2024-04-24 08:58:52,345] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.535+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:14.981893132Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.848222ms 09:00:51 kafka | [2024-04-24 08:58:52,351] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 09:00:51 policy-pap | [2024-04-24T08:59:12.535+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.038606751Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 09:00:51 kafka | [2024-04-24 08:58:52,353] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.044785012Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.181501ms 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"77293ae2-da7e-415d-9361-5e79c680736b","timestampMs":1713949152480,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,353] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.052052232Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 09:00:51 policy-pap | [2024-04-24T08:59:12.535+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:00:51 kafka | [2024-04-24 08:58:52,353] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:15.060487059Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.424728ms 09:00:51 policy-pap | [2024-04-24T08:59:12.553+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:52,353] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.065356989Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 09:00:51 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 kafka | [2024-04-24 08:58:52,362] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.071642501Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.285672ms 09:00:51 policy-pap | [2024-04-24T08:59:12.553+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 09:00:51 kafka | [2024-04-24 08:58:52,364] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.10036519Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 09:00:51 policy-pap | [2024-04-24T08:59:12.556+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:52,364] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.109233055Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.869455ms 09:00:51 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,365] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:15.113885081Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 09:00:51 policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping 09:00:51 kafka | [2024-04-24 08:58:52,365] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.114045783Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=160.302µs 09:00:51 policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping enqueue 09:00:51 kafka | [2024-04-24 08:58:52,379] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.117572521Z level=info msg="Executing migration" id=create_alert_configuration_table 09:00:51 policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping timer 09:00:51 kafka | [2024-04-24 08:58:52,380] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.118477375Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=904.274µs 09:00:51 kafka | [2024-04-24 08:58:52,380] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.557+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.132666938Z level=info msg="Executing migration" id="Add column default in alert_configuration" 09:00:51 kafka | [2024-04-24 08:58:52,381] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a213e342-fe55-4aeb-87b1-3b23ade78ea0","timestampMs":1713949152545,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup"} 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.142773022Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.105674ms 09:00:51 kafka | [2024-04-24 08:58:52,381] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.558+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, expireMs=1713949182498] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.148607628Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 09:00:51 kafka | [2024-04-24 08:58:52,392] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 09:00:51 policy-pap | [2024-04-24T08:59:12.558+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping listener 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.14869397Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=86.162µs 09:00:51 kafka | [2024-04-24 08:58:52,393] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.558+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopped 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.1524392Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 09:00:51 kafka | [2024-04-24 08:58:52,393] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.157056516Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.616526ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.562+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate successful 09:00:51 kafka | [2024-04-24 08:58:52,394] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.160108776Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 09:00:51 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 start publishing next request 09:00:51 kafka | [2024-04-24 08:58:52,394] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.161191683Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.082807ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting 09:00:51 kafka | [2024-04-24 08:58:52,404] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.16468685Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 09:00:51 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting listener 09:00:51 kafka | [2024-04-24 08:58:52,406] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.17142434Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.73687ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting timer 09:00:51 kafka | [2024-04-24 08:58:52,406] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.175914844Z level=info msg="Executing migration" id=create_ngalert_configuration_table 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] 09:00:51 kafka | [2024-04-24 08:58:52,407] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.176566784Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=650.45µs 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange starting enqueue 09:00:51 kafka | [2024-04-24 08:58:52,407] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.181633247Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 09:00:51 policy-db-migrator | > upgrade 0100-upgrade.sql 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange started 09:00:51 kafka | [2024-04-24 08:58:52,418] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.183302644Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.667487ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] 09:00:51 kafka | [2024-04-24 08:58:52,420] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.188325256Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 09:00:51 policy-db-migrator | select 'upgrade to 1100 completed' as msg 09:00:51 policy-pap | [2024-04-24T08:59:12.563+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:52,420] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.195389221Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.065035ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,425] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.199102943Z level=info msg="Executing migration" id="create provenance_type table" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.605+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.199681221Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=577.888µs 09:00:51 kafka | [2024-04-24 08:58:52,425] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | msg 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,435] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.207188634Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 09:00:51 policy-db-migrator | upgrade to 1100 completed 09:00:51 policy-pap | [2024-04-24T08:59:12.605+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 09:00:51 kafka | [2024-04-24 08:58:52,436] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.208549066Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.363712ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.611+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:52,436] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.21369334Z level=info msg="Executing migration" id="create alert_image table" 09:00:51 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 09:00:51 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,436] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.214355451Z level=info msg="Migration successfully executed" id="create alert_image table" duration=663.111µs 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping 09:00:51 kafka | [2024-04-24 08:58:52,436] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping enqueue 09:00:51 kafka | [2024-04-24 08:58:52,443] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.218481298Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping timer 09:00:51 kafka | [2024-04-24 08:58:52,444] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.21916781Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=686.482µs 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] 09:00:51 kafka | [2024-04-24 08:58:52,444] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.222263251Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopping listener 09:00:51 kafka | [2024-04-24 08:58:52,444] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.222313112Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=47.761µs 09:00:51 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange stopped 09:00:51 kafka | [2024-04-24 08:58:52,444] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(3d7pexomSuav55xzl5U12w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.22653042Z level=info msg="Executing migration" id=create_alert_configuration_history_table 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpStateChange successful 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.227196621Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=666.101µs 09:00:51 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 start publishing next request 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.233612796Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.235335764Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.726878ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.658+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting listener 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.240228344Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting timer 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.2406198Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 09:00:51 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 09:00:51 policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer 
[name=e1bfc2a1-b68d-4b0d-960e-f7897689b4f6, expireMs=1713949182659] 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.244497973Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate starting enqueue 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.244931361Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=433.448µs 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate started 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.248101403Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.659+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.249142449Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.040376ms 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.252201529Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 09:00:51 policy-db-migrator | > upgrade 0120-audit_sequence.sql 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.663+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.259356177Z level=info msg="Migration successfully executed" id="add last_applied 
column to alert_configuration_history" duration=7.154168ms 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 09:00:51 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"77293ae2-da7e-415d-9361-5e79c680736b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7432afad-c26c-427a-97c6-ce2c56947811","timestampMs":1713949152547,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.263964381Z level=info msg="Executing migration" id="create library_element table v1" 09:00:51 policy-pap | [2024-04-24T08:59:12.663+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 77293ae2-da7e-415d-9361-5e79c680736b 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.26570803Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.743829ms 09:00:51 policy-pap | [2024-04-24T08:59:12.668+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 policy-db-migrator | 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.2730945Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","timestampMs":1713949152481,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.274147768Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.062778ms 09:00:51 policy-pap | [2024-04-24T08:59:12.668+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 09:00:51 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT 
IFNULL(max(id),0) FROM jpapolicyaudit)) 09:00:51 kafka | [2024-04-24 08:58:52,450] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.27857237Z level=info msg="Executing migration" id="create library_element_connection table v1" 09:00:51 policy-pap | [2024-04-24T08:59:12.668+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.279283691Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=710.751µs 09:00:51 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c5968f1a-b7af-452f-bf63-1bacb67aef0f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"fe7846fa-1b6b-47d0-a2a9-3907eb9b0f7a","timestampMs":1713949152576,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.28589238Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.669+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c5968f1a-b7af-452f-bf63-1bacb67aef0f 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.28717277Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.28084ms 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.671+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.290719589Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 09:00:51 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 
08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.291850407Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.131838ms 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.671+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.329264778Z level=info msg="Executing migration" id="increase max description length to 2048" 09:00:51 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 09:00:51 policy-pap | [2024-04-24T08:59:12.678+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.329339549Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=81.221µs 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | {"source":"pap-43e719fa-ff69-4964-bc31-d2528becc332","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","timestampMs":1713949152597,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.334557364Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 09:00:51 policy-db-migrator | 09:00:51 policy-pap | [2024-04-24T08:59:12.678+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.334666146Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=98.352µs 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 09:00:51 policy-db-migrator | -------------- 09:00:51 policy-pap | [2024-04-24T08:59:12.683+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 09:00:51 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message 
for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.338054331Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.338373207Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=319.916µs 09:00:51 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.684+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id e1bfc2a1-b68d-4b0d-960e-f7897689b4f6 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.341943465Z level=info msg="Executing migration" id="create data_keys table" 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.684+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:00:51 policy-db-migrator | -------------- 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 09:00:51 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e1bfc2a1-b68d-4b0d-960e-f7897689b4f6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"cd1194fc-c9f5-401f-9ec0-7e330c6971e2","timestampMs":1713949152670,"name":"apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.34290186Z level=info msg="Migration successfully executed" id="create data_keys table" duration=958.665µs 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.346967297Z level=info msg="Executing migration" id="create secrets table" 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 09:00:51 policy-pap | 
[2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping enqueue 09:00:51 policy-db-migrator | TRUNCATE TABLE sequence 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.348084445Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.123618ms 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping timer 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.353374341Z level=info msg="Executing migration" id="rename data_keys name column to id" 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=e1bfc2a1-b68d-4b0d-960e-f7897689b4f6, expireMs=1713949182659] 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.388039128Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.658417ms 09:00:51 kafka | [2024-04-24 08:58:52,452] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopping listener 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.391100838Z level=info msg="Executing migration" id="add name column into data_keys" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.685+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate stopped 09:00:51 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.39922428Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.122992ms 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.689+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 PdpUpdate successful 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.406286726Z level=info msg="Executing migration" id="copy data_keys id column values into name" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-30 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:12.689+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f41e458d-42a5-4aba-9a0c-ce74df53e6f4 has no more requests 09:00:51 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.406394528Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=108.442µs 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:20.226+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.411245337Z level=info msg="Executing migration" id="rename data_keys name column to label" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:20.300+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.443198398Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.948061ms 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:20.308+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.467943593Z level=info msg="Executing migration" id="rename data_keys id column back to name" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:20.313+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 09:00:51 policy-db-migrator | DROP TABLE pdpstatistics 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.497063398Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.123095ms 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:20.741+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.501925767Z level=info msg="Executing migration" id="create kv_store table v1" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:21.283+00:00|INFO|SessionData|http-nio-6969-exec-7] create 
cached group testGroup 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.502806891Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=881.084µs 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:21.284+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.50577306Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:21.892+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup 09:00:51 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.506849877Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.076517ms 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:22.117+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.514760007Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:22.207+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 09:00:51 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.515008241Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=248.634µs 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:22.207+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.519192799Z level=info msg="Executing migration" id="create permission table" 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:22.208+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:15.520052063Z level=info msg="Migration successfully executed" id="create permission table" duration=860.474µs 09:00:51 kafka | [2024-04-24 08:58:52,453] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 09:00:51 policy-pap | [2024-04-24T08:59:22.223+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-24T08:59:22Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-24T08:59:22Z, user=policyadmin)] 09:00:51 policy-db-migrator | 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.533434152Z level=info msg="Executing migration" id="add unique index permission.role_id" 09:00:51 kafka | [2024-04-24 08:58:52,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | [2024-04-24T08:59:22.972+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup 09:00:51 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.534733063Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.303702ms 09:00:51 kafka | [2024-04-24 08:58:52,464] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-pap | [2024-04-24T08:59:22.974+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.537868115Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | [2024-04-24T08:59:22.974+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 09:00:51 policy-db-migrator | DROP TABLE statistics_sequence 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.539081404Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.213059ms 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-pap | [2024-04-24T08:59:22.974+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup 09:00:51 policy-db-migrator | -------------- 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.544493612Z level=info msg="Executing migration" id="create role table" 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | [2024-04-24T08:59:22.975+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup 09:00:51 policy-db-migrator | 09:00:51 
grafana | logger=migrator t=2024-04-24T08:58:15.545461458Z level=info msg="Migration successfully executed" id="create role table" duration=968.416µs 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-pap | [2024-04-24T08:59:22.988+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-24T08:59:22Z, user=policyadmin)] 09:00:51 policy-db-migrator | policyadmin: OK: upgrade (1300) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.55110483Z level=info msg="Executing migration" id="add column display_name" 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 09:00:51 policy-db-migrator | name version 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.558509281Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.404201ms 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 09:00:51 policy-db-migrator | policyadmin 1300 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.561467399Z level=info msg="Executing migration" id="add column group_name" 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 09:00:51 policy-db-migrator | ID script operation from_version to_version tag success atTime 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.568897961Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.429902ms 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 09:00:51 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.572337087Z level=info msg="Executing migration" id="add index role.org_id" 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 09:00:51 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 
08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.573402094Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.064097ms 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-pap | [2024-04-24T08:59:23.341+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 09:00:51 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.578396336Z level=info msg="Executing migration" id="add unique index role_org_id_name" 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-pap | [2024-04-24T08:59:23.351+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-24T08:59:23Z, user=policyadmin)] 09:00:51 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.579464233Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.066757ms 09:00:51 policy-pap | [2024-04-24T08:59:42.499+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=77293ae2-da7e-415d-9361-5e79c680736b, expireMs=1713949182498] 09:00:51 kafka | [2024-04-24 08:58:52,465] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.583659472Z level=info msg="Executing migration" id="add index role_org_id_uid" 09:00:51 policy-pap | [2024-04-24T08:59:42.563+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=c5968f1a-b7af-452f-bf63-1bacb67aef0f, expireMs=1713949182563] 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.585299268Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.640376ms 09:00:51 policy-pap | [2024-04-24T08:59:43.926+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.590277Z level=info msg="Executing migration" id="create team role table" 09:00:51 policy-pap | 
[2024-04-24T08:59:43.928+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.591113414Z level=info msg="Migration successfully executed" id="create team role table" duration=841.345µs 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.594603931Z level=info msg="Executing migration" id="add index team_role.org_id" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.595729229Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.124988ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.599049323Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.600172552Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.122799ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.604427751Z level=info msg="Executing migration" id="add index team_role.team_id" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.605482978Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.055027ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager 
brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.61290944Z level=info msg="Executing migration" id="create user role table" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.613754453Z level=info msg="Migration successfully executed" id="create user role table" duration=847.253µs 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.618086114Z level=info msg="Executing migration" id="add index user_role.org_id" 09:00:51 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.619124401Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.038337ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.623944989Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.625040998Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.096099ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.632179904Z level=info msg="Executing migration" id="add index user_role.user_id" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.633999604Z level=info msg="Migration successfully executed" id="add index user_role.user_id" 
duration=1.81881ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.637906317Z level=info msg="Executing migration" id="create builtin role table" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:20 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.639411983Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.505196ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.643939516Z level=info msg="Executing migration" id="add index builtin_role.role_id" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.645074335Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.134759ms 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.648414399Z level=info msg="Executing migration" id="add index builtin_role.name" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.649906464Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.490475ms 09:00:51 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.653579274Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 09:00:51 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 
grafana | logger=migrator t=2024-04-24T08:58:15.661077247Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.499413ms 09:00:51 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.666139059Z level=info msg="Executing migration" id="add index builtin_role.org_id" 09:00:51 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.667220306Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.080537ms 09:00:51 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.670852155Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 09:00:51 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.67173645Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=884.435µs 09:00:51 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.67479703Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 09:00:51 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.675640914Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=844.214µs 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.679892513Z 
level=info msg="Executing migration" id="add unique index role.uid" 09:00:51 kafka | [2024-04-24 08:58:52,466] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.681015792Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.125909ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.745126679Z level=info msg="Executing migration" id="create seed assignment table" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.746397029Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.26978ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.751392621Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.752621391Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.22892ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.758241502Z level=info msg="Executing migration" id="add column hidden to role table" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.766461847Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.217455ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.771380597Z level=info msg="Executing migration" id="permission kind migration" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:21 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.777031349Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.650952ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.780369114Z level=info msg="Executing migration" id="permission attribute migration" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.786341071Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.970817ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.790800025Z level=info msg="Executing migration" id="permission identifier migration" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.798590842Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.788827ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.801957787Z level=info msg="Executing migration" id="add permission identifier index" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 52 
0610-toscanodetemplates.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.803166026Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.207869ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.806670023Z level=info msg="Executing migration" id="add permission action scope role_id index" 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.807934044Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.263601ms 09:00:51 kafka | [2024-04-24 08:58:52,467] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.812249114Z level=info msg="Executing migration" id="remove permission role_id action scope index" 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.813379153Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.130139ms 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.816680027Z level=info msg="Executing migration" id="create query_history table v1" 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.817705863Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.025266ms 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.858983848Z level=info 
msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.860029625Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.049738ms 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.864414566Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.864464337Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=52.551µs 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.868000805Z level=info msg="Executing migration" id="rbac disabled migrator" 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.868068146Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=67.621µs 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.890391531Z level=info msg="Executing migration" id="teams permissions migration" 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.890727687Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=336.386µs 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.894940205Z level=info msg="Executing migration" id="dashboard permissions" 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.895332431Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=392.546µs 09:00:51 kafka | [2024-04-24 08:58:52,468] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:22 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.898113047Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 09:00:51 kafka | [2024-04-24 08:58:52,469] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.898597565Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=484.568µs 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.90137063Z level=info msg="Executing migration" id="drop managed folder create actions" 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.901508982Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=138.272µs 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.908422835Z level=info msg="Executing migration" id="alerting notification permissions" 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.908782161Z level=info msg="Migration successfully executed" id="alerting notification permissions" 
duration=361.156µs 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.911937423Z level=info msg="Executing migration" id="create query_history_star table v1" 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.912501052Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=564.569µs 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.915575192Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.916322264Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=746.782µs 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.919996074Z level=info msg="Executing migration" id="add column org_id in query_history_star" 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.925948011Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.950777ms 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.930858682Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 09:00:51 kafka | [2024-04-24 
08:58:52,470] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.930948833Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=64.831µs 09:00:51 kafka | [2024-04-24 08:58:52,470] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.933590746Z level=info msg="Executing migration" id="create correlation table v1" 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.934348879Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=757.813µs 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:23 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.941851201Z level=info msg="Executing migration" id="add index correlations.uid" 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.942638484Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=787.333µs 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.947483573Z level=info msg="Executing migration" id="add index correlations.source_uid" 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.948274526Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=790.673µs 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.951178873Z level=info msg="Executing migration" id="add correlation config column" 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:00:51 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.96264243Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.464947ms 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.965521197Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 09:00:51 kafka | [2024-04-24 08:58:52,471] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.96629083Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=769.633µs 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.971946583Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.97306557Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.118377ms 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:15.979152Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2404240858200800u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.002209727Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.059497ms 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.007832139Z level=info msg="Executing migration" id="create correlation v2" 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.008622081Z level=info msg="Migration successfully executed" id="create correlation v2" duration=789.842µs 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.011849314Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.013006093Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.156079ms 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
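The grafana migrator lines interleaved above come in pairs: an "Executing migration" record followed by a "Migration successfully executed" record carrying the step's duration, ranging here from tens of microseconds up to the ~23 ms table rename logged just above (duration=23.059497ms). A minimal stdlib-Python sketch, assuming a saved copy of this console output (the console.log path and the slowest_migrations helper are illustrative only, not part of this job), for ranking the slowest migrator steps:

import re

# Hypothetical helper: scan a saved console log for grafana migrator
# "Migration successfully executed" records and rank the slowest steps.
PAT = re.compile(r'msg="Migration successfully executed" id="([^"]+)" duration=([0-9.]+)(µs|ms|s)\b')
SCALE = {'µs': 0.001, 'ms': 1.0, 's': 1000.0}  # normalise everything to milliseconds

def slowest_migrations(log_path, top=10):
    timings = []
    with open(log_path, encoding='utf-8') as fh:
        for line in fh:
            for mig_id, value, unit in PAT.findall(line):
                timings.append((float(value) * SCALE[unit], mig_id))
    return sorted(timings, reverse=True)[:top]

if __name__ == '__main__':
    for ms, mig_id in slowest_migrations('console.log'):
        print(f'{ms:10.3f} ms  {mig_id}')

Normalising µs/ms/s to a single unit keeps the ranking comparable across the mixed units the migrator emits.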
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:24 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.015977401Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.017222271Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.24425ms 09:00:51 kafka | [2024-04-24 08:58:52,473] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.022549139Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.023738689Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.18932ms 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.027804704Z level=info msg="Executing migration" id="copy correlation v1 to v2" 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.028201531Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=397.267µs 09:00:51 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.031467545Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 09:00:51 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.032618823Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.151318ms 09:00:51 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2404240858200900u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.037715047Z level=info msg="Executing migration" id="add provisioning column" 09:00:51 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.046865106Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.147809ms 09:00:51 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.049886755Z level=info msg="Executing migration" id="create entity_events table" 09:00:51 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.052058311Z level=info msg="Migration successfully executed" id="create entity_events table" duration=2.170686ms 09:00:51 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,474] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.05513609Z level=info msg="Executing migration" id="create dashboard public config v1" 09:00:51 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.056798398Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.661838ms 09:00:51 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.061140059Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 09:00:51 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.061880171Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 09:00:51 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.065608151Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:00:51 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2404240858201000u 1 2024-04-24 08:58:25 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
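The GroupMetadataManager records threaded through this stretch report, for each __consumer_offsets partition, how long loading offsets and group metadata took and how much of that time was spent in the scheduler. A hedged sketch under the same assumption of a saved console log (offset_load_summary is a hypothetical helper; the default of 50 expected partitions matches the __consumer_offsets topic seen in this run) that confirms every partition reported in and summarises the timings:

import re
from statistics import mean

# Hypothetical completeness check over the "Finished loading offsets and
# group metadata" records emitted by kafka's GroupMetadataManager.
PAT = re.compile(
    r'Finished loading offsets and group metadata from (__consumer_offsets-\d+) '
    r'in (\d+) milliseconds for epoch \d+, of which (\d+) milliseconds was spent in the scheduler')

def offset_load_summary(log_path, expected_partitions=50):
    with open(log_path, encoding='utf-8') as fh:
        text = fh.read()                       # records can wrap, so match the whole file
    loads = {}
    for part, total_ms, sched_ms in PAT.findall(text):
        loads[part] = (int(total_ms), int(sched_ms))
    totals = [t for t, _ in loads.values()]
    return {
        'partitions_seen': len(loads),
        'partitions_missing': expected_partitions - len(loads),
        'max_ms': max(totals, default=0),
        'mean_ms': round(mean(totals), 1) if totals else 0,
    }

In this run the per-partition loads all finish in single-digit milliseconds, so a summary like this mainly serves as a completeness check rather than a performance one.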
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.06611045Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:00:51 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2404240858201100u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.069668668Z level=info msg="Executing migration" id="Drop old dashboard public config table" 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.070487501Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=818.353µs 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.075042636Z level=info msg="Executing migration" id="recreate dashboard public config v1" 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:25 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.076595051Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.549125ms 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2404240858201200u 1 2024-04-24 08:58:26 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.08023191Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2404240858201300u 1 2024-04-24 08:58:26 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.08205122Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.81885ms 09:00:51 kafka | [2024-04-24 08:58:52,475] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2404240858201300u 1 2024-04-24 08:58:26 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.085348115Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2404240858201300u 1 2024-04-24 08:58:26 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.087274425Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.92583ms 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 policy-db-migrator | policyadmin: OK @ 1300 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.091402923Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.092491621Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.088598ms 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.130686934Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
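The policy-db-migrator rows running through this section appear to form its migration-history table, read here as: row id, SQL script, operation, from-version, to-version, batch tag, success flag, completion time (this column reading is inferred from the rows themselves, not from migrator documentation); the run closes with the summary record "policyadmin: OK @ 1300" on the line above, i.e. the policyadmin schema reached version 1300. A hypothetical cross-check over the original, unwrapped console log (check_migrations is an illustrative name):

import re

# Hedged reading of the migrator's history rows:
#   id  script  operation  from  to  batch-tag  success  completed-at
# (column meaning inferred from the rows in this log, not from migrator docs).
ROW = re.compile(
    r'policy-db-migrator \| (\d+) (\S+\.sql) (\w+) (\d+) (\d+) (\S+) (\d) '
    r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})')
STATUS = re.compile(r'policy-db-migrator \| (\w+): (\w+) @ (\d+)')

def check_migrations(log_path):
    with open(log_path, encoding='utf-8') as fh:
        text = fh.read()
    rows = ROW.findall(text)
    failed = [r[1] for r in rows if r[6] != '1']     # scripts whose success flag is not 1
    status = STATUS.search(text)
    return {
        'scripts_recorded': len(rows),
        'failed_scripts': failed,
        'final_status': status.groups() if status else None,  # e.g. ('policyadmin', 'OK', '1300')
    }

A non-empty failed_scripts list or a final status other than OK would point at the database setup step rather than the CSIT tests themselves.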
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.132330342Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.643918ms 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.136654242Z level=info msg="Executing migration" id="Drop public config table" 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.13781966Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.165018ms 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.14138117Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.142564638Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.183178ms 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.145764921Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 09:00:51 kafka | [2024-04-24 08:58:52,476] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.14693534Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.170039ms 09:00:51 kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.150900844Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:00:51 kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.153136901Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.235737ms 09:00:51 kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.156765861Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 09:00:51 kafka | [2024-04-24 08:58:52,477] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.158538109Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.772478ms 09:00:51 kafka | [2024-04-24 08:58:52,478] INFO [Broker id=1] Finished LeaderAndIsr request in 1124ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.162658407Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.187421761Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.763614ms 09:00:51 kafka | [2024-04-24 08:58:52,481] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=3d7pexomSuav55xzl5U12w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=UfYjnzzkRPeYang4gRgPIg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.198669784Z level=info msg="Executing migration" id="add annotations_enabled column" 09:00:51 kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.210508608Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.838554ms 09:00:51 kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.213978784Z level=info msg="Executing migration" id="add time_selection_enabled column" 09:00:51 kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.220027763Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.048729ms 09:00:51 kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.223088453Z level=info msg="Executing migration" id="delete orphaned public dashboards" 09:00:51 kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.223340248Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=251.815µs 09:00:51 kafka | [2024-04-24 08:58:52,487] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.227741549Z level=info msg="Executing migration" id="add share column" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | 
logger=migrator t=2024-04-24T08:58:16.240865554Z level=info msg="Migration successfully executed" id="add share column" duration=13.121715ms 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.24430429Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.244471433Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=165.813µs 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.247784767Z level=info msg="Executing migration" id="create file table" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.248486449Z level=info msg="Migration successfully executed" id="create file table" duration=702.411µs 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.253885676Z level=info msg="Executing migration" id="file table idx: path natural pk" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.255717566Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" 
duration=1.83034ms 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.259336505Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.260510265Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.17335ms 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.26390722Z level=info msg="Executing migration" id="create file_meta table" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.264740074Z level=info msg="Migration successfully executed" id="create file_meta table" duration=832.514µs 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.268657437Z level=info msg="Executing migration" id="file table idx: path key" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.269901678Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.244521ms 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.275662431Z level=info msg="Executing migration" 
id="set path collation in file table" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.275727022Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=67.011µs 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.279508235Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.279645517Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=137.752µs 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.282580715Z level=info msg="Executing migration" id="managed permissions migration" 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.283432938Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=852.133µs 09:00:51 kafka | [2024-04-24 08:58:52,488] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.287907162Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached 
leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.288168066Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=258.864µs 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.291566481Z level=info msg="Executing migration" id="RBAC action name migrator" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.293024755Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.458134ms 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.296733985Z level=info msg="Executing migration" id="Add UID column to playlist" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.306366553Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.631148ms 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.309772189Z level=info msg="Executing migration" id="Update uid column values in playlist" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.309972152Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=199.603µs 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.314694049Z level=info msg="Executing migration" id="Add index for uid in playlist" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.31594296Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.248761ms 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.319709541Z level=info msg="Executing migration" id="update group index for alert rules" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.320150818Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=440.267µs 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.323643605Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request 
sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.324030041Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=387.016µs 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.328302972Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.329122045Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=817.943µs 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.332588842Z level=info msg="Executing migration" id="add action column to seed_assignment" 09:00:51 kafka | [2024-04-24 08:58:52,489] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.342455623Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.865611ms 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.34598816Z level=info msg="Executing migration" id="add scope column to seed_assignment" 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | 
logger=migrator t=2024-04-24T08:58:16.356140336Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.149146ms 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.359830487Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.36065918Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=825.873µs 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.365034831Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.435061445Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=70.023504ms 09:00:51 kafka | [2024-04-24 08:58:52,490] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.597497397Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 09:00:51 kafka | [2024-04-24 08:58:52,490] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.599538731Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.044623ms 09:00:51 kafka | [2024-04-24 08:58:52,491] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.605720712Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 09:00:51 kafka | [2024-04-24 08:58:52,558] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.607183175Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.462113ms 09:00:51 kafka | [2024-04-24 08:58:52,577] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.611840382Z level=info msg="Executing migration" id="add primary key to seed_assigment" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.637112304Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.272412ms 09:00:51 kafka | [2024-04-24 08:58:52,597] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a in Empty state. Created a new member id consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.641392414Z level=info msg="Executing migration" id="add origin column to seed_assignment" 09:00:51 kafka | [2024-04-24 08:58:52,602] INFO [GroupCoordinator 1]: Preparing to rebalance group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.650751387Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.358653ms 09:00:51 kafka | [2024-04-24 08:58:52,830] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 in Empty state. Created a new member id consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.656310907Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 09:00:51 kafka | [2024-04-24 08:58:52,834] INFO [GroupCoordinator 1]: Preparing to rebalance group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.656555001Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=244.714µs 09:00:51 kafka | [2024-04-24 08:58:55,587] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.659610542Z level=info msg="Executing migration" id="prevent seeding OnCall access" 09:00:51 kafka | [2024-04-24 08:58:55,603] INFO [GroupCoordinator 1]: Stabilized group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.659755384Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=146.272µs 09:00:51 kafka | [2024-04-24 08:58:55,609] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-b2dc9f1d-b06d-4078-927e-cc7dc2d2688c for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.662503999Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 09:00:51 kafka | [2024-04-24 08:58:55,610] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c2598a93-7b5f-4e4e-b23a-b864ffd9a18a-3-2e3abf31-158b-4904-8a97-f271619f738d for group c2598a93-7b5f-4e4e-b23a-b864ffd9a18a for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.662873934Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=370.625µs 09:00:51 kafka | [2024-04-24 08:58:55,835] INFO [GroupCoordinator 1]: Stabilized group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.666455604Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 09:00:51 kafka | [2024-04-24 08:58:55,851] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6c14929a-34c8-48a0-adf2-d542a07b4ce8-2-0953ac9a-4503-441d-8d7e-d642725f8ea2 for group 6c14929a-34c8-48a0-adf2-d542a07b4ce8 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.666818669Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=363.505µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.67117534Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.671430185Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=254.595µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.674077087Z level=info msg="Executing migration" id="create folder table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.675032383Z level=info msg="Migration successfully executed" id="create folder table" duration=955.186µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.678566661Z level=info msg="Executing migration" id="Add index for parent_uid" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.680461862Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.894901ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.685091457Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.686368008Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.276161ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.68951716Z level=info msg="Executing migration" id="Update folder title length" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.689543251Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.781µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.692807343Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.694067064Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.259281ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.698265353Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.699486093Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.22121ms 09:00:51 grafana | logger=migrator 
t=2024-04-24T08:58:16.703042011Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.704631396Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.588165ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.708351847Z level=info msg="Executing migration" id="Sync dashboard and folder table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.709059078Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=707.721µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.713483951Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.713778556Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=292.825µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.718943791Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.720081019Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.141258ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.723936932Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.724890117Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=953.305µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.730348257Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.73120214Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=853.853µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.821090228Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.822075505Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=985.167µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.82549455Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.826373215Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=878.845µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.831355926Z level=info msg="Executing migration" id="create anon_device table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.832129378Z level=info msg="Migration successfully executed" id="create anon_device table" duration=773.112µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.835504604Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.836492659Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=988.035µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.840542606Z level=info msg="Executing migration" id="add index anon_device.updated_at" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.841513592Z level=info msg="Migration 
successfully executed" id="add index anon_device.updated_at" duration=970.826µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.844785235Z level=info msg="Executing migration" id="create signing_key table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.845545497Z level=info msg="Migration successfully executed" id="create signing_key table" duration=760.182µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.849675845Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.850613341Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=937.526µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.859540766Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.861516508Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.975552ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.865928061Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.866395448Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=468.077µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.924124111Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.952121488Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=27.998037ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.983587792Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.984885053Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.298771ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.990509124Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.992525697Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=2.016373ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:16.997476198Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.00061538Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=3.138622ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.005456279Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.00674798Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.292391ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.009707728Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.011074901Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.366753ms 
09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.014477406Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.015655905Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.178119ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.019865064Z level=info msg="Executing migration" id="create sso_setting table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.021008113Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.142469ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.028324312Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.029678534Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.356482ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.037018384Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.037559123Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=541.959µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.043320447Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.043419169Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=99.792µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.049126102Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.058272311Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.144399ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.0625113Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.071589209Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.077548ms 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.077141509Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.077460285Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=318.386µs 09:00:51 grafana | logger=migrator t=2024-04-24T08:58:17.081759375Z level=info msg="migrations completed" performed=548 skipped=0 duration=5.448562772s 09:00:51 grafana | logger=sqlstore t=2024-04-24T08:58:17.094591464Z level=info msg="Created default admin" user=admin 09:00:51 grafana | logger=sqlstore t=2024-04-24T08:58:17.094851169Z level=info msg="Created default organization" 09:00:51 grafana | logger=secrets t=2024-04-24T08:58:17.099828Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 09:00:51 grafana | logger=plugin.store t=2024-04-24T08:58:17.137222701Z level=info msg="Loading plugins..." 
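
The migrator finishes 548 migrations in roughly 5.4 s; most complete in well under a millisecond, while a few dominate (for example "update seed_assignment role_name column to nullable" at ~70 ms and "Add folder_uid for dashboard" at ~28 ms). A rough sketch for pulling the slowest migrations out of a saved copy of this console log, assuming the logfmt layout shown above (id="..." duration=<value><unit>) is preserved:

    # Rough sketch: extract "Migration successfully executed" entries from a saved
    # console log and rank them by duration. Assumes the grafana migrator logfmt
    # layout seen above; the log file path is an assumption.
    import re

    PATTERN = re.compile(
        r'msg="Migration successfully executed" id="(?P<id>[^"]+)" '
        r'duration=(?P<dur>[\d.]+)(?P<unit>µs|ms|s)'
    )
    UNIT_TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

    def slowest_migrations(log_path, top=10):
        durations = []
        with open(log_path, encoding="utf-8") as fh:
            for line in fh:
                # several entries can share one physical line in this log
                for m in PATTERN.finditer(line):
                    ms = float(m.group("dur")) * UNIT_TO_MS[m.group("unit")]
                    durations.append((ms, m.group("id")))
        return sorted(durations, reverse=True)[:top]

    if __name__ == "__main__":
        for ms, mig_id in slowest_migrations("console.log"):
            print(f"{ms:10.3f} ms  {mig_id}")
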
09:00:51 grafana | logger=local.finder t=2024-04-24T08:58:17.182728464Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 09:00:51 grafana | logger=plugin.store t=2024-04-24T08:58:17.182766324Z level=info msg="Plugins loaded" count=55 duration=45.544134ms 09:00:51 grafana | logger=query_data t=2024-04-24T08:58:17.185865125Z level=info msg="Query Service initialization" 09:00:51 grafana | logger=live.push_http t=2024-04-24T08:58:17.189593076Z level=info msg="Live Push Gateway initialization" 09:00:51 grafana | logger=ngalert.migration t=2024-04-24T08:58:17.193653292Z level=info msg=Starting 09:00:51 grafana | logger=ngalert.migration t=2024-04-24T08:58:17.193986137Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 09:00:51 grafana | logger=ngalert.migration orgID=1 t=2024-04-24T08:58:17.194286903Z level=info msg="Migrating alerts for organisation" 09:00:51 grafana | logger=ngalert.migration orgID=1 t=2024-04-24T08:58:17.19474461Z level=info msg="Alerts found to migrate" alerts=0 09:00:51 grafana | logger=ngalert.migration t=2024-04-24T08:58:17.19600324Z level=info msg="Completed alerting migration" 09:00:51 grafana | logger=ngalert.state.manager t=2024-04-24T08:58:17.228622953Z level=info msg="Running in alternative execution of Error/NoData mode" 09:00:51 grafana | logger=infra.usagestats.collector t=2024-04-24T08:58:17.231422809Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 09:00:51 grafana | logger=provisioning.datasources t=2024-04-24T08:58:17.234314686Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 09:00:51 grafana | logger=provisioning.alerting t=2024-04-24T08:58:17.264772023Z level=info msg="starting to provision alerting" 09:00:51 grafana | logger=provisioning.alerting t=2024-04-24T08:58:17.264800874Z level=info msg="finished to provision alerting" 09:00:51 grafana | logger=grafanaStorageLogger t=2024-04-24T08:58:17.264997327Z level=info msg="Storage starting" 09:00:51 grafana | logger=ngalert.state.manager t=2024-04-24T08:58:17.266002303Z level=info msg="Warming state cache for startup" 09:00:51 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-24T08:58:17.266441801Z level=info msg="Starting MultiOrg Alertmanager" 09:00:51 grafana | logger=http.server t=2024-04-24T08:58:17.269357878Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 09:00:51 grafana | logger=sqlstore.transactions t=2024-04-24T08:58:17.276670477Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 09:00:51 grafana | logger=grafana.update.checker t=2024-04-24T08:58:17.494665388Z level=info msg="Update check succeeded" duration=225.504734ms 09:00:51 grafana | logger=plugins.update.checker t=2024-04-24T08:58:17.495990968Z level=info msg="Update check succeeded" duration=226.62562ms 09:00:51 grafana | logger=ngalert.state.manager t=2024-04-24T08:58:17.546103437Z level=info msg="State cache has been initialized" states=0 duration=280.096974ms 09:00:51 grafana | logger=ngalert.scheduler t=2024-04-24T08:58:17.546133078Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 09:00:51 grafana | logger=ticker t=2024-04-24T08:58:17.546188658Z level=info msg=starting first_tick=2024-04-24T08:58:20Z 09:00:51 grafana | logger=grafana-apiserver t=2024-04-24T08:58:17.550853295Z level=info msg="Adding 
GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 09:00:51 grafana | logger=grafana-apiserver t=2024-04-24T08:58:17.551400084Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 09:00:51 grafana | logger=provisioning.dashboard t=2024-04-24T08:58:17.657977424Z level=info msg="starting to provision dashboards" 09:00:51 grafana | logger=sqlstore.transactions t=2024-04-24T08:58:17.77161602Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 09:00:51 grafana | logger=provisioning.dashboard t=2024-04-24T08:58:17.95901796Z level=info msg="finished to provision dashboards" 09:00:51 grafana | logger=infra.usagestats t=2024-04-24T08:59:43.27663479Z level=info msg="Usage stats are ready to report" 09:00:51 ++ echo 'Tearing down containers...' 09:00:51 Tearing down containers... 09:00:51 ++ docker-compose down -v --remove-orphans 09:00:52 Stopping policy-apex-pdp ... 09:00:52 Stopping policy-pap ... 09:00:52 Stopping kafka ... 09:00:52 Stopping policy-api ... 09:00:52 Stopping grafana ... 09:00:52 Stopping simulator ... 09:00:52 Stopping mariadb ... 09:00:52 Stopping prometheus ... 09:00:52 Stopping zookeeper ... 09:00:52 Stopping grafana ... done 09:00:53 Stopping prometheus ... done 09:01:02 Stopping policy-apex-pdp ... done 09:01:13 Stopping simulator ... done 09:01:13 Stopping policy-pap ... done 09:01:15 Stopping mariadb ... done 09:01:16 Stopping kafka ... done 09:01:16 Stopping zookeeper ... done 09:01:23 Stopping policy-api ... done 09:01:24 Removing policy-apex-pdp ... 09:01:24 Removing policy-pap ... 09:01:24 Removing kafka ... 09:01:24 Removing policy-api ... 09:01:24 Removing policy-db-migrator ... 09:01:24 Removing grafana ... 09:01:24 Removing simulator ... 09:01:24 Removing mariadb ... 09:01:24 Removing prometheus ... 09:01:24 Removing zookeeper ... 09:01:24 Removing policy-apex-pdp ... done 09:01:24 Removing policy-api ... done 09:01:24 Removing simulator ... done 09:01:24 Removing policy-db-migrator ... done 09:01:24 Removing mariadb ... done 09:01:24 Removing grafana ... done 09:01:24 Removing zookeeper ... done 09:01:24 Removing prometheus ... done 09:01:24 Removing policy-pap ... done 09:01:24 Removing kafka ... 
done 09:01:24 Removing network compose_default 09:01:24 ++ cd /w/workspace/policy-pap-master-project-csit-pap 09:01:24 + load_set 09:01:24 + _setopts=hxB 09:01:24 ++ echo braceexpand:hashall:interactive-comments:xtrace 09:01:24 ++ tr : ' ' 09:01:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 09:01:24 + set +o braceexpand 09:01:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 09:01:24 + set +o hashall 09:01:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 09:01:24 + set +o interactive-comments 09:01:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 09:01:24 + set +o xtrace 09:01:24 ++ echo hxB 09:01:24 ++ sed 's/./& /g' 09:01:24 + for i in $(echo "$_setopts" | sed 's/./& /g') 09:01:24 + set +h 09:01:24 + for i in $(echo "$_setopts" | sed 's/./& /g') 09:01:24 + set +x 09:01:24 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 09:01:24 + [[ -n /tmp/tmp.9nztubu5q5 ]] 09:01:24 + rsync -av /tmp/tmp.9nztubu5q5/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 09:01:24 sending incremental file list 09:01:24 ./ 09:01:24 log.html 09:01:24 output.xml 09:01:24 report.html 09:01:24 testplan.txt 09:01:24 09:01:24 sent 918,617 bytes received 95 bytes 1,837,424.00 bytes/sec 09:01:24 total size is 918,075 speedup is 1.00 09:01:24 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 09:01:24 + exit 1 09:01:24 Build step 'Execute shell' marked build as failure 09:01:24 $ ssh-agent -k 09:01:24 unset SSH_AUTH_SOCK; 09:01:24 unset SSH_AGENT_PID; 09:01:24 echo Agent pid 2087 killed; 09:01:24 [ssh-agent] Stopped. 09:01:24 Robot results publisher started... 09:01:24 INFO: Checking test criticality is deprecated and will be dropped in a future release! 09:01:24 -Parsing output xml: 09:01:25 Done! 09:01:25 WARNING! Could not find file: **/log.html 09:01:25 WARNING! Could not find file: **/report.html 09:01:25 -Copying log files to build dir: 09:01:25 Done! 09:01:25 -Assigning results to build: 09:01:25 Done! 09:01:25 -Checking thresholds: 09:01:25 Done! 09:01:25 Done publishing Robot results. 09:01:25 [PostBuildScript] - [INFO] Executing post build scripts. 
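
The teardown and archiving above are driven by shell: docker-compose down -v --remove-orphans, an rsync of the Robot Framework artifacts (log.html, output.xml, report.html, testplan.txt) into the CSIT archive directory, and exit 1 because the suite failed. A rough Python equivalent of that step, with the workspace and temp paths copied from the log and otherwise treated as examples rather than the actual script:

    # Rough sketch of the teardown/archive step shown above. Paths mirror the
    # log output; the per-run temp directory name is taken from the log and
    # would differ on every run.
    import shutil
    import subprocess
    import sys
    from pathlib import Path

    WORKSPACE = Path("/w/workspace/policy-pap-master-project-csit-pap")
    ROBOT_TMP = Path("/tmp/tmp.9nztubu5q5")          # per-run temp dir from the log
    ARCHIVE = WORKSPACE / "csit/archives/pap"

    def teardown_and_archive(suite_failed: bool) -> None:
        # Equivalent of: docker-compose down -v --remove-orphans
        subprocess.run(["docker-compose", "down", "-v", "--remove-orphans"],
                       cwd=WORKSPACE / "compose", check=True)

        ARCHIVE.mkdir(parents=True, exist_ok=True)
        shutil.copy(WORKSPACE / "compose/docker_compose.log", ARCHIVE)

        # Equivalent of: rsync -av $ROBOT_TMP/ $ARCHIVE
        subprocess.run(["rsync", "-av", f"{ROBOT_TMP}/", str(ARCHIVE)], check=True)

        if suite_failed:
            sys.exit(1)   # marks the Jenkins "Execute shell" step as failed, as above

    if __name__ == "__main__":
        teardown_and_archive(suite_failed=True)
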
09:01:25 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16316828185803153699.sh 09:01:25 ---> sysstat.sh 09:01:25 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10321740824680298229.sh 09:01:25 ---> package-listing.sh 09:01:25 ++ facter osfamily 09:01:25 ++ tr '[:upper:]' '[:lower:]' 09:01:25 + OS_FAMILY=debian 09:01:25 + workspace=/w/workspace/policy-pap-master-project-csit-pap 09:01:25 + START_PACKAGES=/tmp/packages_start.txt 09:01:25 + END_PACKAGES=/tmp/packages_end.txt 09:01:25 + DIFF_PACKAGES=/tmp/packages_diff.txt 09:01:25 + PACKAGES=/tmp/packages_start.txt 09:01:25 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 09:01:25 + PACKAGES=/tmp/packages_end.txt 09:01:25 + case "${OS_FAMILY}" in 09:01:25 + dpkg -l 09:01:25 + grep '^ii' 09:01:25 + '[' -f /tmp/packages_start.txt ']' 09:01:25 + '[' -f /tmp/packages_end.txt ']' 09:01:25 + diff /tmp/packages_start.txt /tmp/packages_end.txt 09:01:26 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 09:01:26 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 09:01:26 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 09:01:26 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3435512097399306784.sh 09:01:26 ---> capture-instance-metadata.sh 09:01:26 Setup pyenv: 09:01:26 system 09:01:26 3.8.13 09:01:26 3.9.13 09:01:26 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 09:01:26 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv 09:01:27 lf-activate-venv(): INFO: Installing: lftools 09:01:37 lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH 09:01:37 INFO: Running in OpenStack, capturing instance metadata 09:01:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4672491028009825115.sh 09:01:37 provisioning config files... 09:01:37 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config4495636280831550086tmp 09:01:37 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 09:01:37 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 09:01:37 [EnvInject] - Injecting environment variables from a build step. 09:01:37 [EnvInject] - Injecting as environment variables the properties content 09:01:37 SERVER_ID=logs 09:01:37 09:01:37 [EnvInject] - Variables injected successfully. 09:01:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1678436478455647875.sh 09:01:37 ---> create-netrc.sh 09:01:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4837189172700282969.sh 09:01:37 ---> python-tools-install.sh 09:01:37 Setup pyenv: 09:01:37 system 09:01:37 3.8.13 09:01:37 3.9.13 09:01:38 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 09:01:38 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv 09:01:39 lf-activate-venv(): INFO: Installing: lftools 09:01:48 lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH 09:01:48 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2886048813112688206.sh 09:01:48 ---> sudo-logs.sh 09:01:48 Archiving 'sudo' log.. 
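
package-listing.sh above snapshots the installed Debian packages (dpkg -l filtered to "ii" lines) into /tmp/packages_end.txt, diffs it against the snapshot taken at job start, and copies all three files into the archive directory. A small sketch of the same idea, assuming a Debian-family host and the /tmp file names used by the script:

    # Small sketch of the package-listing step above: snapshot "dpkg -l"
    # (installed packages only), diff the start/end snapshots, keep all three
    # files. Assumes a Debian-family host; paths match the script output above.
    import difflib
    import subprocess
    from pathlib import Path

    START = Path("/tmp/packages_start.txt")
    END = Path("/tmp/packages_end.txt")
    DIFF = Path("/tmp/packages_diff.txt")

    def snapshot(dest: Path) -> None:
        out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True,
                             check=True).stdout
        installed = [line for line in out.splitlines() if line.startswith("ii")]
        dest.write_text("\n".join(installed) + "\n")

    def diff_snapshots() -> None:
        diff = difflib.unified_diff(
            START.read_text().splitlines(),
            END.read_text().splitlines(),
            fromfile=str(START), tofile=str(END), lineterm="",
        )
        DIFF.write_text("\n".join(diff) + "\n")

    if __name__ == "__main__":
        snapshot(END)        # snapshot(START) would have been taken at job start
        diff_snapshots()
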
09:01:48 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins111671421306962699.sh 09:01:48 ---> job-cost.sh 09:01:48 Setup pyenv: 09:01:48 system 09:01:48 3.8.13 09:01:48 3.9.13 09:01:48 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 09:01:48 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv 09:01:49 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 09:01:54 lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH 09:01:54 INFO: No Stack... 09:01:55 INFO: Retrieving Pricing Info for: v3-standard-8 09:01:57 INFO: Archiving Costs 09:01:57 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4994298349377037758.sh 09:01:57 ---> logs-deploy.sh 09:01:57 Setup pyenv: 09:01:57 system 09:01:57 3.8.13 09:01:57 3.9.13 09:01:57 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 09:01:57 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-oy3i from file:/tmp/.os_lf_venv 09:01:58 lf-activate-venv(): INFO: Installing: lftools 09:02:07 lf-activate-venv(): INFO: Adding /tmp/venv-oy3i/bin to PATH 09:02:07 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1657 09:02:07 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 09:02:08 Archives upload complete. 09:02:08 INFO: archiving logs to Nexus 09:02:09 ---> uname -a: 09:02:09 Linux prd-ubuntu1804-docker-8c-8g-25485 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 09:02:09 09:02:09 09:02:09 ---> lscpu: 09:02:09 Architecture: x86_64 09:02:09 CPU op-mode(s): 32-bit, 64-bit 09:02:09 Byte Order: Little Endian 09:02:09 CPU(s): 8 09:02:09 On-line CPU(s) list: 0-7 09:02:09 Thread(s) per core: 1 09:02:09 Core(s) per socket: 1 09:02:09 Socket(s): 8 09:02:09 NUMA node(s): 1 09:02:09 Vendor ID: AuthenticAMD 09:02:09 CPU family: 23 09:02:09 Model: 49 09:02:09 Model name: AMD EPYC-Rome Processor 09:02:09 Stepping: 0 09:02:09 CPU MHz: 2799.996 09:02:09 BogoMIPS: 5599.99 09:02:09 Virtualization: AMD-V 09:02:09 Hypervisor vendor: KVM 09:02:09 Virtualization type: full 09:02:09 L1d cache: 32K 09:02:09 L1i cache: 32K 09:02:09 L2 cache: 512K 09:02:09 L3 cache: 16384K 09:02:09 NUMA node0 CPU(s): 0-7 09:02:09 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 09:02:09 09:02:09 09:02:09 ---> nproc: 09:02:09 8 09:02:09 09:02:09 09:02:09 ---> df -h: 09:02:09 Filesystem Size Used Avail Use% Mounted on 09:02:09 udev 16G 0 16G 0% /dev 09:02:09 tmpfs 3.2G 708K 3.2G 1% /run 09:02:09 /dev/vda1 155G 14G 142G 9% / 09:02:09 tmpfs 16G 0 16G 0% /dev/shm 09:02:09 tmpfs 5.0M 0 5.0M 0% /run/lock 09:02:09 tmpfs 16G 0 16G 0% /sys/fs/cgroup 09:02:09 /dev/vda15 105M 4.4M 100M 5% /boot/efi 09:02:09 tmpfs 3.2G 0 3.2G 0% /run/user/1001 09:02:09 09:02:09 09:02:09 ---> free -m: 09:02:09 total used free shared buff/cache available 09:02:09 Mem: 
32167 841 25173 0 6152 30869 09:02:09 Swap: 1023 0 1023 09:02:09 09:02:09 09:02:09 ---> ip addr: 09:02:09 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 09:02:09 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 09:02:09 inet 127.0.0.1/8 scope host lo 09:02:09 valid_lft forever preferred_lft forever 09:02:09 inet6 ::1/128 scope host 09:02:09 valid_lft forever preferred_lft forever 09:02:09 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 09:02:09 link/ether fa:16:3e:a2:4a:6c brd ff:ff:ff:ff:ff:ff 09:02:09 inet 10.30.107.191/23 brd 10.30.107.255 scope global dynamic ens3 09:02:09 valid_lft 85921sec preferred_lft 85921sec 09:02:09 inet6 fe80::f816:3eff:fea2:4a6c/64 scope link 09:02:09 valid_lft forever preferred_lft forever 09:02:09 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 09:02:09 link/ether 02:42:93:31:9d:bd brd ff:ff:ff:ff:ff:ff 09:02:09 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 09:02:09 valid_lft forever preferred_lft forever 09:02:09 09:02:09 09:02:09 ---> sar -b -r -n DEV: 09:02:09 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25485) 04/24/24 _x86_64_ (8 CPU) 09:02:09 09:02:09 08:54:13 LINUX RESTART (8 CPU) 09:02:09 09:02:09 08:55:01 tps rtps wtps bread/s bwrtn/s 09:02:09 08:56:01 97.80 17.76 80.04 1024.23 27122.28 09:02:09 08:57:01 133.56 23.11 110.45 2777.00 33050.22 09:02:09 08:58:01 234.36 0.15 234.21 17.73 125697.72 09:02:09 08:59:01 336.53 12.18 324.35 790.60 49443.64 09:02:09 09:00:01 19.53 0.00 19.53 0.00 21116.93 09:02:09 09:01:01 22.23 0.08 22.14 9.60 19888.95 09:02:09 09:02:01 77.59 1.93 75.65 111.98 22762.37 09:02:09 Average: 131.65 7.89 123.76 675.86 42725.47 09:02:09 09:02:09 08:55:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 09:02:09 08:56:01 30123272 31706320 2815940 8.55 70636 1822760 1437564 4.23 864608 1658092 154256 09:02:09 08:57:01 28531480 31678444 4407732 13.38 108048 3294364 1397652 4.11 975648 3032952 1282392 09:02:09 08:58:01 25846808 31670976 7092404 21.53 140732 5811800 1489428 4.38 1015992 5548512 574836 09:02:09 08:59:01 23575860 29563080 9363352 28.43 156796 5939368 8873372 26.11 3299312 5454908 1700 09:02:09 09:00:01 23637836 29626120 9301376 28.24 156984 5939912 8835300 26.00 3239440 5453704 188 09:02:09 09:01:01 23682012 29696660 9257200 28.10 157380 5967628 8083732 23.78 3186620 5467668 380 09:02:09 09:02:01 25785340 31617284 7153872 21.72 159308 5800004 1511600 4.45 1298908 5311896 2484 09:02:09 Average: 25883230 30794126 7055982 21.42 135698 4939405 4518378 13.29 1982933 4561105 288034 09:02:09 09:02:09 08:55:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 09:02:09 08:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:02:09 08:56:01 lo 1.67 1.67 0.19 0.19 0.00 0.00 0.00 0.00 09:02:09 08:56:01 ens3 54.41 36.31 838.46 8.03 0.00 0.00 0.00 0.00 09:02:09 08:57:01 br-3de9c8a2e03c 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:02:09 08:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:02:09 08:57:01 lo 7.13 7.13 0.67 0.67 0.00 0.00 0.00 0.00 09:02:09 08:57:01 ens3 329.10 208.78 6522.27 19.13 0.00 0.00 0.00 0.00 09:02:09 08:58:01 br-3de9c8a2e03c 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:02:09 08:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:02:09 08:58:01 lo 6.33 6.33 0.65 0.65 0.00 0.00 0.00 0.00 09:02:09 08:58:01 ens3 867.96 475.70 24860.68 33.58 0.00 0.00 0.00 0.00 09:02:09 08:59:01 veth9ef653e 5.80 7.22 0.89 1.00 0.00 0.00 0.00 0.00 09:02:09 08:59:01 veth3781b3a 45.68 39.71 
17.14 39.84 0.00 0.00 0.00 0.00 09:02:09 08:59:01 veth211ae86 0.55 0.93 0.06 0.31 0.00 0.00 0.00 0.00 09:02:09 08:59:01 br-3de9c8a2e03c 1.53 1.50 0.90 1.81 0.00 0.00 0.00 0.00 09:02:09 09:00:01 veth9ef653e 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00 09:02:09 09:00:01 veth3781b3a 0.50 0.50 0.63 0.08 0.00 0.00 0.00 0.00 09:02:09 09:00:01 veth211ae86 0.23 0.18 0.02 0.01 0.00 0.00 0.00 0.00 09:02:09 09:00:01 br-3de9c8a2e03c 1.57 1.80 0.99 0.24 0.00 0.00 0.00 0.00 09:02:09 09:01:01 veth9ef653e 0.17 0.37 0.01 0.03 0.00 0.00 0.00 0.00 09:02:09 09:01:01 veth3781b3a 0.35 0.42 0.58 0.03 0.00 0.00 0.00 0.00 09:02:09 09:01:01 br-3de9c8a2e03c 1.15 1.45 0.10 0.14 0.00 0.00 0.00 0.00 09:02:09 09:01:01 veth54c7651 0.00 0.45 0.00 0.03 0.00 0.00 0.00 0.00 09:02:09 09:02:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:02:09 09:02:01 lo 35.44 35.44 6.27 6.27 0.00 0.00 0.00 0.00 09:02:09 09:02:01 ens3 1666.36 1014.00 33076.55 154.60 0.00 0.00 0.00 0.00 09:02:09 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 09:02:09 Average: lo 4.50 4.50 0.85 0.85 0.00 0.00 0.00 0.00 09:02:09 Average: ens3 189.69 111.19 4614.84 14.00 0.00 0.00 0.00 0.00 09:02:09 09:02:09 09:02:09 ---> sar -P ALL: 09:02:09 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-25485) 04/24/24 _x86_64_ (8 CPU) 09:02:09 09:02:09 08:54:13 LINUX RESTART (8 CPU) 09:02:09 09:02:09 08:55:01 CPU %user %nice %system %iowait %steal %idle 09:02:09 08:56:01 all 9.85 0.00 0.69 3.23 0.03 86.20 09:02:09 08:56:01 0 0.77 0.00 0.40 14.65 0.03 84.15 09:02:09 08:56:01 1 6.74 0.00 0.43 0.08 0.02 92.73 09:02:09 08:56:01 2 3.86 0.00 0.33 0.38 0.02 95.42 09:02:09 08:56:01 3 10.06 0.00 0.58 0.53 0.02 88.81 09:02:09 08:56:01 4 25.61 0.00 1.18 1.22 0.07 71.92 09:02:09 08:56:01 5 9.03 0.00 0.72 0.25 0.02 89.99 09:02:09 08:56:01 6 5.63 0.00 0.57 0.15 0.00 93.65 09:02:09 08:56:01 7 17.20 0.00 1.32 8.63 0.07 72.77 09:02:09 08:57:01 all 11.50 0.00 2.42 2.73 0.04 83.30 09:02:09 08:57:01 0 4.46 0.00 2.72 11.94 0.03 80.85 09:02:09 08:57:01 1 16.78 0.00 2.59 1.45 0.05 79.13 09:02:09 08:57:01 2 11.10 0.00 2.32 0.25 0.05 86.28 09:02:09 08:57:01 3 14.45 0.00 2.72 3.13 0.07 79.63 09:02:09 08:57:01 4 12.20 0.00 1.95 0.15 0.03 85.67 09:02:09 08:57:01 5 4.61 0.00 1.56 0.67 0.02 93.15 09:02:09 08:57:01 6 17.05 0.00 2.50 2.28 0.05 78.12 09:02:09 08:57:01 7 11.35 0.00 3.00 1.98 0.03 83.64 09:02:09 08:58:01 all 9.27 0.00 4.06 10.15 0.07 76.45 09:02:09 08:58:01 0 7.90 0.00 3.16 0.86 0.05 88.03 09:02:09 08:58:01 1 9.61 0.00 4.54 6.14 0.07 79.64 09:02:09 08:58:01 2 11.42 0.00 4.12 0.22 0.07 84.18 09:02:09 08:58:01 3 7.12 0.00 4.99 11.83 0.03 76.03 09:02:09 08:58:01 4 8.12 0.00 3.61 41.78 0.07 46.42 09:02:09 08:58:01 5 10.77 0.00 4.07 0.91 0.07 84.18 09:02:09 08:58:01 6 9.52 0.00 4.53 15.34 0.14 70.47 09:02:09 08:58:01 7 9.74 0.00 3.44 4.13 0.07 82.62 09:02:09 08:59:01 all 27.23 0.00 3.79 5.12 0.13 63.74 09:02:09 08:59:01 0 30.07 0.00 4.43 3.61 0.10 61.79 09:02:09 08:59:01 1 22.15 0.00 3.28 2.22 0.12 72.23 09:02:09 08:59:01 2 29.34 0.00 3.68 6.34 0.12 60.52 09:02:09 08:59:01 3 29.01 0.00 4.14 2.99 0.12 63.74 09:02:09 08:59:01 4 28.70 0.00 4.02 7.20 0.27 59.81 09:02:09 08:59:01 5 26.39 0.00 3.67 3.76 0.10 66.08 09:02:09 08:59:01 6 21.94 0.00 3.49 13.34 0.10 61.14 09:02:09 08:59:01 7 30.22 0.00 3.56 1.52 0.10 64.60 09:02:09 09:00:01 all 3.90 0.00 0.39 1.18 0.06 94.46 09:02:09 09:00:01 0 5.13 0.00 0.43 9.04 0.08 85.32 09:02:09 09:00:01 1 3.90 0.00 0.33 0.00 0.07 95.70 09:02:09 09:00:01 2 3.37 0.00 0.32 0.10 0.05 96.16 09:02:09 09:00:01 3 3.99 0.00 0.40 0.02 0.03 95.56 09:02:09 09:00:01 
4 4.04 0.00 0.45 0.10 0.05 95.36 09:02:09 09:00:01 5 2.97 0.00 0.28 0.05 0.05 96.65 09:02:09 09:00:01 6 4.74 0.00 0.50 0.15 0.05 94.55 09:02:09 09:00:01 7 3.09 0.00 0.45 0.00 0.08 96.37 09:02:09 09:01:01 all 1.28 0.00 0.32 2.66 0.05 95.69 09:02:09 09:01:01 0 0.90 0.00 0.28 19.34 0.03 79.44 09:02:09 09:01:01 1 1.02 0.00 0.22 0.58 0.08 98.10 09:02:09 09:01:01 2 1.30 0.00 0.35 0.27 0.02 98.07 09:02:09 09:01:01 3 3.17 0.00 0.38 0.48 0.10 95.86 09:02:09 09:01:01 4 1.03 0.00 0.25 0.02 0.05 98.65 09:02:09 09:01:01 5 0.92 0.00 0.28 0.07 0.03 98.70 09:02:09 09:01:01 6 0.80 0.00 0.40 0.00 0.05 98.75 09:02:09 09:01:01 7 1.10 0.00 0.37 0.50 0.05 97.98 09:02:09 09:02:01 all 6.77 0.00 0.67 2.84 0.04 89.67 09:02:09 09:02:01 0 2.62 0.00 0.57 0.20 0.03 96.58 09:02:09 09:02:01 1 6.84 0.00 0.72 2.44 0.03 89.97 09:02:09 09:02:01 2 6.59 0.00 0.62 0.82 0.03 91.94 09:02:09 09:02:01 3 2.22 0.00 0.53 12.60 0.03 84.61 09:02:09 09:02:01 4 5.10 0.00 0.50 0.23 0.03 94.13 09:02:09 09:02:01 5 9.48 0.00 0.69 1.61 0.03 88.19 09:02:09 09:02:01 6 14.27 0.00 0.84 3.60 0.07 81.23 09:02:09 09:02:01 7 7.08 0.00 0.87 1.24 0.07 90.75 09:02:09 Average: all 9.95 0.00 1.75 3.98 0.06 84.25 09:02:09 Average: 0 7.39 0.00 1.71 8.53 0.05 82.31 09:02:09 Average: 1 9.56 0.00 1.72 1.83 0.06 86.82 09:02:09 Average: 2 9.53 0.00 1.67 1.19 0.05 87.57 09:02:09 Average: 3 9.99 0.00 1.95 4.50 0.06 83.50 09:02:09 Average: 4 12.11 0.00 1.70 7.19 0.08 78.92 09:02:09 Average: 5 9.14 0.00 1.60 1.04 0.05 88.16 09:02:09 Average: 6 10.54 0.00 1.82 4.95 0.06 82.62 09:02:09 Average: 7 11.38 0.00 1.85 2.57 0.07 84.13 09:02:09 09:02:09 09:02:09
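
The sar -P ALL dump above closes the job log; the iowait spike in the container-startup interval (08:58, with CPU 4 briefly above 40% iowait) and the overall averages are usually what matters when triaging a slow CSIT run. A rough sketch for summarizing the all-CPU rows of a plain-text sar dump like this one, assuming the header column order %user %nice %system %iowait %steal %idle shown above; the input file name is an assumption.

    # Rough sketch: summarize the "all"-CPU rows of a plain-text "sar -P ALL"
    # dump like the one above. Assumes per-interval rows of the form
    # "HH:MM:SS all <user> <nice> <system> <iowait> <steal> <idle>".
    import re

    ROW = re.compile(
        r"(\d{2}:\d{2}:\d{2})\s+all\s+"
        r"([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)"
    )

    def summarize(sar_text: str) -> None:
        rows = [(ts, float(user), float(iowait), float(idle))
                for ts, user, _nice, _system, iowait, _steal, idle in ROW.findall(sar_text)]
        if not rows:
            return
        worst = max(rows, key=lambda r: r[2])
        print(f"intervals:     {len(rows)}")
        print(f"mean %idle:    {sum(r[3] for r in rows) / len(rows):6.2f}")
        print(f"mean %iowait:  {sum(r[2] for r in rows) / len(rows):6.2f}")
        print(f"worst %iowait: {worst[2]:.2f} at {worst[1 - 1]}")

    if __name__ == "__main__":
        with open("sar-cpu.txt", encoding="utf-8") as fh:   # file name is an assumption
            summarize(fh.read())
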