23:10:58 Started by timer 23:10:58 Running as SYSTEM 23:10:58 [EnvInject] - Loading node environment variables. 23:10:58 Building remotely on prd-ubuntu1804-docker-8c-8g-36303 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap 23:10:58 [ssh-agent] Looking for ssh-agent implementation... 23:10:58 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 23:10:58 $ ssh-agent 23:10:58 SSH_AUTH_SOCK=/tmp/ssh-ZPL5ujerc8EZ/agent.2077 23:10:58 SSH_AGENT_PID=2079 23:10:58 [ssh-agent] Started. 23:10:58 Running ssh-add (command line suppressed) 23:10:58 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_17948159731780089405.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_17948159731780089405.key) 23:10:58 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 23:10:58 The recommended git tool is: NONE 23:11:00 using credential onap-jenkins-ssh 23:11:00 Wiping out workspace first. 23:11:00 Cloning the remote Git repository 23:11:00 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 23:11:00 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 23:11:00 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 23:11:00 > git --version # timeout=10 23:11:00 > git --version # 'git version 2.17.1' 23:11:00 using GIT_SSH to set credentials Gerrit user 23:11:00 Verifying host key using manually-configured host key entries 23:11:00 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 23:11:01 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 23:11:01 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 23:11:01 Avoid second fetch 23:11:01 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 23:11:01 Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb 
(refs/remotes/origin/master) 23:11:01 > git config core.sparsecheckout # timeout=10 23:11:01 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30 23:11:02 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots" 23:11:02 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10 23:11:02 provisioning config files... 23:11:02 copy managed file [npmrc] to file:/home/jenkins/.npmrc 23:11:02 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 23:11:02 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13088569064281028346.sh 23:11:02 ---> python-tools-install.sh 23:11:02 Setup pyenv: 23:11:02 * system (set by /opt/pyenv/version) 23:11:02 * 3.8.13 (set by /opt/pyenv/version) 23:11:02 * 3.9.13 (set by /opt/pyenv/version) 23:11:02 * 3.10.6 (set by /opt/pyenv/version) 23:11:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-KuFe 23:11:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 23:11:10 lf-activate-venv(): INFO: Installing: lftools 23:11:50 lf-activate-venv(): INFO: Adding /tmp/venv-KuFe/bin to PATH 23:11:50 Generating Requirements File 23:12:21 Python 3.10.6 23:12:22 pip 24.0 from /tmp/venv-KuFe/lib/python3.10/site-packages/pip (python 3.10) 23:12:22 appdirs==1.4.4 23:12:22 argcomplete==3.3.0 23:12:22 aspy.yaml==1.3.0 23:12:22 attrs==23.2.0 23:12:22 autopage==0.5.2 23:12:22 beautifulsoup4==4.12.3 23:12:22 boto3==1.34.94 23:12:22 botocore==1.34.94 23:12:22 bs4==0.0.2 23:12:22 cachetools==5.3.3 23:12:22 certifi==2024.2.2 23:12:22 cffi==1.16.0 23:12:22 cfgv==3.4.0 23:12:22 chardet==5.2.0 23:12:22 charset-normalizer==3.3.2 23:12:22 click==8.1.7 23:12:22 cliff==4.6.0 23:12:22 cmd2==2.4.3 23:12:22 cryptography==3.3.2 23:12:22 debtcollector==3.0.0 23:12:22 decorator==5.1.1 23:12:22 defusedxml==0.7.1 23:12:22 Deprecated==1.2.14 23:12:22 distlib==0.3.8 23:12:22 dnspython==2.6.1 23:12:22 docker==4.2.2 23:12:22 dogpile.cache==1.3.2 23:12:22 
email_validator==2.1.1 23:12:22 filelock==3.14.0 23:12:22 future==1.0.0 23:12:22 gitdb==4.0.11 23:12:22 GitPython==3.1.43 23:12:22 google-auth==2.29.0 23:12:22 httplib2==0.22.0 23:12:22 identify==2.5.36 23:12:22 idna==3.7 23:12:22 importlib-resources==1.5.0 23:12:22 iso8601==2.1.0 23:12:22 Jinja2==3.1.3 23:12:22 jmespath==1.0.1 23:12:22 jsonpatch==1.33 23:12:22 jsonpointer==2.4 23:12:22 jsonschema==4.21.1 23:12:22 jsonschema-specifications==2023.12.1 23:12:22 keystoneauth1==5.6.0 23:12:22 kubernetes==29.0.0 23:12:22 lftools==0.37.10 23:12:22 lxml==5.2.1 23:12:22 MarkupSafe==2.1.5 23:12:22 msgpack==1.0.8 23:12:22 multi_key_dict==2.0.3 23:12:22 munch==4.0.0 23:12:22 netaddr==1.2.1 23:12:22 netifaces==0.11.0 23:12:22 niet==1.4.2 23:12:22 nodeenv==1.8.0 23:12:22 oauth2client==4.1.3 23:12:22 oauthlib==3.2.2 23:12:22 openstacksdk==3.1.0 23:12:22 os-client-config==2.1.0 23:12:22 os-service-types==1.7.0 23:12:22 osc-lib==3.0.1 23:12:22 oslo.config==9.4.0 23:12:22 oslo.context==5.5.0 23:12:22 oslo.i18n==6.3.0 23:12:22 oslo.log==5.5.1 23:12:22 oslo.serialization==5.4.0 23:12:22 oslo.utils==7.1.0 23:12:22 packaging==24.0 23:12:22 pbr==6.0.0 23:12:22 platformdirs==4.2.1 23:12:22 prettytable==3.10.0 23:12:22 pyasn1==0.6.0 23:12:22 pyasn1_modules==0.4.0 23:12:22 pycparser==2.22 23:12:22 pygerrit2==2.0.15 23:12:22 PyGithub==2.3.0 23:12:22 pyinotify==0.9.6 23:12:22 PyJWT==2.8.0 23:12:22 PyNaCl==1.5.0 23:12:22 pyparsing==2.4.7 23:12:22 pyperclip==1.8.2 23:12:22 pyrsistent==0.20.0 23:12:22 python-cinderclient==9.5.0 23:12:22 python-dateutil==2.9.0.post0 23:12:22 python-heatclient==3.5.0 23:12:22 python-jenkins==1.8.2 23:12:22 python-keystoneclient==5.4.0 23:12:22 python-magnumclient==4.4.0 23:12:22 python-novaclient==18.6.0 23:12:22 python-openstackclient==6.6.0 23:12:22 python-swiftclient==4.5.0 23:12:22 PyYAML==6.0.1 23:12:22 referencing==0.35.0 23:12:22 requests==2.31.0 23:12:22 requests-oauthlib==2.0.0 23:12:22 requestsexceptions==1.4.0 23:12:22 rfc3986==2.0.0 23:12:22 
rpds-py==0.18.0 23:12:22 rsa==4.9 23:12:22 ruamel.yaml==0.18.6 23:12:22 ruamel.yaml.clib==0.2.8 23:12:22 s3transfer==0.10.1 23:12:22 simplejson==3.19.2 23:12:22 six==1.16.0 23:12:22 smmap==5.0.1 23:12:22 soupsieve==2.5 23:12:22 stevedore==5.2.0 23:12:22 tabulate==0.9.0 23:12:22 toml==0.10.2 23:12:22 tomlkit==0.12.4 23:12:22 tqdm==4.66.2 23:12:22 typing_extensions==4.11.0 23:12:22 tzdata==2024.1 23:12:22 urllib3==1.26.18 23:12:22 virtualenv==20.26.1 23:12:22 wcwidth==0.2.13 23:12:22 websocket-client==1.8.0 23:12:22 wrapt==1.16.0 23:12:22 xdg==6.0.0 23:12:22 xmltodict==0.13.0 23:12:22 yq==3.4.3 23:12:22 [EnvInject] - Injecting environment variables from a build step. 23:12:22 [EnvInject] - Injecting as environment variables the properties content 23:12:22 SET_JDK_VERSION=openjdk17 23:12:22 GIT_URL="git://cloud.onap.org/mirror" 23:12:22 23:12:22 [EnvInject] - Variables injected successfully. 23:12:22 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins14905730535557814642.sh 23:12:22 ---> update-java-alternatives.sh 23:12:22 ---> Updating Java version 23:12:22 ---> Ubuntu/Debian system detected 23:12:23 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 23:12:23 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 23:12:23 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 23:12:23 openjdk version "17.0.4" 2022-07-19 23:12:23 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 23:12:23 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 23:12:23 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 23:12:23 [EnvInject] - Injecting environment variables from a build step. 23:12:23 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 23:12:23 [EnvInject] - Variables injected successfully. 
23:12:23 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins5636856194194881287.sh 23:12:23 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:23 + set +u 23:12:23 + save_set 23:12:23 + RUN_CSIT_SAVE_SET=ehxB 23:12:23 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:23 + '[' 1 -eq 0 ']' 23:12:23 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:23 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:23 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:23 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:23 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:23 + export ROBOT_VARIABLES= 23:12:23 + ROBOT_VARIABLES= 23:12:23 + export PROJECT=pap 23:12:23 + PROJECT=pap 23:12:23 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:23 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:23 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:23 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:23 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:23 + relax_set 23:12:23 + set +e 23:12:23 + set +o pipefail 23:12:23 + . 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:23 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:23 +++ mktemp -d 23:12:23 ++ ROBOT_VENV=/tmp/tmp.syqikLG7Oa 23:12:23 ++ echo ROBOT_VENV=/tmp/tmp.syqikLG7Oa 23:12:23 +++ python3 --version 23:12:23 ++ echo 'Python version is: Python 3.6.9' 23:12:23 Python version is: Python 3.6.9 23:12:23 ++ python3 -m venv --clear /tmp/tmp.syqikLG7Oa 23:12:25 ++ source /tmp/tmp.syqikLG7Oa/bin/activate 23:12:25 +++ deactivate nondestructive 23:12:25 +++ '[' -n '' ']' 23:12:25 +++ '[' -n '' ']' 23:12:25 +++ '[' -n /bin/bash -o -n '' ']' 23:12:25 +++ hash -r 23:12:25 +++ '[' -n '' ']' 23:12:25 +++ unset VIRTUAL_ENV 23:12:25 +++ '[' '!' nondestructive = nondestructive ']' 23:12:25 +++ VIRTUAL_ENV=/tmp/tmp.syqikLG7Oa 23:12:25 +++ export VIRTUAL_ENV 23:12:25 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:25 +++ PATH=/tmp/tmp.syqikLG7Oa/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:25 +++ export PATH 23:12:25 +++ '[' -n '' ']' 23:12:25 +++ '[' -z '' ']' 23:12:25 +++ _OLD_VIRTUAL_PS1= 23:12:25 +++ '[' 'x(tmp.syqikLG7Oa) ' '!=' x ']' 23:12:25 +++ PS1='(tmp.syqikLG7Oa) ' 23:12:25 +++ export PS1 23:12:25 +++ '[' -n /bin/bash -o -n '' ']' 23:12:25 +++ hash -r 23:12:25 ++ set -exu 23:12:25 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:28 ++ echo 'Installing Python Requirements' 23:12:28 Installing Python Requirements 23:12:28 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:48 ++ python3 -m pip -qq freeze 
23:12:49 bcrypt==4.0.1 23:12:49 beautifulsoup4==4.12.3 23:12:49 bitarray==2.9.2 23:12:49 certifi==2024.2.2 23:12:49 cffi==1.15.1 23:12:49 charset-normalizer==2.0.12 23:12:49 cryptography==40.0.2 23:12:49 decorator==5.1.1 23:12:49 elasticsearch==7.17.9 23:12:49 elasticsearch-dsl==7.4.1 23:12:49 enum34==1.1.10 23:12:49 idna==3.7 23:12:49 importlib-resources==5.4.0 23:12:49 ipaddr==2.2.0 23:12:49 isodate==0.6.1 23:12:49 jmespath==0.10.0 23:12:49 jsonpatch==1.32 23:12:49 jsonpath-rw==1.4.0 23:12:49 jsonpointer==2.3 23:12:49 lxml==5.2.1 23:12:49 netaddr==0.8.0 23:12:49 netifaces==0.11.0 23:12:49 odltools==0.1.28 23:12:49 paramiko==3.4.0 23:12:49 pkg_resources==0.0.0 23:12:49 ply==3.11 23:12:49 pyang==2.6.0 23:12:49 pyangbind==0.8.1 23:12:49 pycparser==2.21 23:12:49 pyhocon==0.3.60 23:12:49 PyNaCl==1.5.0 23:12:49 pyparsing==3.1.2 23:12:49 python-dateutil==2.9.0.post0 23:12:49 regex==2023.8.8 23:12:49 requests==2.27.1 23:12:49 robotframework==6.1.1 23:12:49 robotframework-httplibrary==0.4.2 23:12:49 robotframework-pythonlibcore==3.0.0 23:12:49 robotframework-requests==0.9.4 23:12:49 robotframework-selenium2library==3.0.0 23:12:49 robotframework-seleniumlibrary==5.1.3 23:12:49 robotframework-sshlibrary==3.8.0 23:12:49 scapy==2.5.0 23:12:49 scp==0.14.5 23:12:49 selenium==3.141.0 23:12:49 six==1.16.0 23:12:49 soupsieve==2.3.2.post1 23:12:49 urllib3==1.26.18 23:12:49 waitress==2.0.0 23:12:49 WebOb==1.8.7 23:12:49 WebTest==3.0.0 23:12:49 zipp==3.6.0 23:12:49 ++ mkdir -p /tmp/tmp.syqikLG7Oa/src/onap 23:12:49 ++ rm -rf /tmp/tmp.syqikLG7Oa/src/onap/testsuite 23:12:49 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:55 ++ echo 'Installing python confluent-kafka library' 23:12:55 Installing python confluent-kafka library 23:12:55 ++ python3 -m pip install -qq confluent-kafka 23:12:57 ++ echo 'Uninstall docker-py and reinstall docker.' 
23:12:57 Uninstall docker-py and reinstall docker. 23:12:57 ++ python3 -m pip uninstall -y -qq docker 23:12:57 ++ python3 -m pip install -U -qq docker 23:12:58 ++ python3 -m pip -qq freeze 23:12:59 bcrypt==4.0.1 23:12:59 beautifulsoup4==4.12.3 23:12:59 bitarray==2.9.2 23:12:59 certifi==2024.2.2 23:12:59 cffi==1.15.1 23:12:59 charset-normalizer==2.0.12 23:12:59 confluent-kafka==2.3.0 23:12:59 cryptography==40.0.2 23:12:59 decorator==5.1.1 23:12:59 deepdiff==5.7.0 23:12:59 dnspython==2.2.1 23:12:59 docker==5.0.3 23:12:59 elasticsearch==7.17.9 23:12:59 elasticsearch-dsl==7.4.1 23:12:59 enum34==1.1.10 23:12:59 future==1.0.0 23:12:59 idna==3.7 23:12:59 importlib-resources==5.4.0 23:12:59 ipaddr==2.2.0 23:12:59 isodate==0.6.1 23:12:59 Jinja2==3.0.3 23:12:59 jmespath==0.10.0 23:12:59 jsonpatch==1.32 23:12:59 jsonpath-rw==1.4.0 23:12:59 jsonpointer==2.3 23:12:59 kafka-python==2.0.2 23:12:59 lxml==5.2.1 23:12:59 MarkupSafe==2.0.1 23:12:59 more-itertools==5.0.0 23:12:59 netaddr==0.8.0 23:12:59 netifaces==0.11.0 23:12:59 odltools==0.1.28 23:12:59 ordered-set==4.0.2 23:12:59 paramiko==3.4.0 23:12:59 pbr==6.0.0 23:12:59 pkg_resources==0.0.0 23:12:59 ply==3.11 23:12:59 protobuf==3.19.6 23:12:59 pyang==2.6.0 23:12:59 pyangbind==0.8.1 23:12:59 pycparser==2.21 23:12:59 pyhocon==0.3.60 23:12:59 PyNaCl==1.5.0 23:12:59 pyparsing==3.1.2 23:12:59 python-dateutil==2.9.0.post0 23:12:59 PyYAML==6.0.1 23:12:59 regex==2023.8.8 23:12:59 requests==2.27.1 23:12:59 robotframework==6.1.1 23:12:59 robotframework-httplibrary==0.4.2 23:12:59 robotframework-onap==0.6.0.dev105 23:12:59 robotframework-pythonlibcore==3.0.0 23:12:59 robotframework-requests==0.9.4 23:12:59 robotframework-selenium2library==3.0.0 23:12:59 robotframework-seleniumlibrary==5.1.3 23:12:59 robotframework-sshlibrary==3.8.0 23:12:59 robotlibcore-temp==1.0.2 23:12:59 scapy==2.5.0 23:12:59 scp==0.14.5 23:12:59 selenium==3.141.0 23:12:59 six==1.16.0 23:12:59 soupsieve==2.3.2.post1 23:12:59 urllib3==1.26.18 23:12:59 waitress==2.0.0 
23:12:59 WebOb==1.8.7 23:12:59 websocket-client==1.3.1 23:12:59 WebTest==3.0.0 23:12:59 zipp==3.6.0 23:12:59 ++ uname 23:12:59 ++ grep -q Linux 23:12:59 ++ sudo apt-get -y -qq install libxml2-utils 23:12:59 + load_set 23:12:59 + _setopts=ehuxB 23:12:59 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:59 ++ tr : ' ' 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o braceexpand 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o hashall 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o interactive-comments 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o nounset 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o xtrace 23:12:59 ++ echo ehuxB 23:12:59 ++ sed 's/./& /g' 23:12:59 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:59 + set +e 23:12:59 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:59 + set +h 23:12:59 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:59 + set +u 23:12:59 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:59 + set +x 23:12:59 + source_safely /tmp/tmp.syqikLG7Oa/bin/activate 23:12:59 + '[' -z /tmp/tmp.syqikLG7Oa/bin/activate ']' 23:12:59 + relax_set 23:12:59 + set +e 23:12:59 + set +o pipefail 23:12:59 + . 
/tmp/tmp.syqikLG7Oa/bin/activate 23:12:59 ++ deactivate nondestructive 23:12:59 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:59 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:59 ++ export PATH 23:12:59 ++ unset _OLD_VIRTUAL_PATH 23:12:59 ++ '[' -n '' ']' 23:12:59 ++ '[' -n /bin/bash -o -n '' ']' 23:12:59 ++ hash -r 23:12:59 ++ '[' -n '' ']' 23:12:59 ++ unset VIRTUAL_ENV 23:12:59 ++ '[' '!' nondestructive = nondestructive ']' 23:12:59 ++ VIRTUAL_ENV=/tmp/tmp.syqikLG7Oa 23:12:59 ++ export VIRTUAL_ENV 23:12:59 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:59 ++ PATH=/tmp/tmp.syqikLG7Oa/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:59 ++ export PATH 23:12:59 ++ '[' -n '' ']' 23:12:59 ++ '[' -z '' ']' 23:12:59 ++ _OLD_VIRTUAL_PS1='(tmp.syqikLG7Oa) ' 23:12:59 ++ '[' 'x(tmp.syqikLG7Oa) ' '!=' x ']' 23:12:59 ++ PS1='(tmp.syqikLG7Oa) (tmp.syqikLG7Oa) ' 23:12:59 ++ export PS1 23:12:59 ++ '[' -n /bin/bash -o -n '' ']' 23:12:59 ++ hash -r 23:12:59 + load_set 23:12:59 + _setopts=hxB 23:12:59 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:59 ++ tr : ' ' 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o braceexpand 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o hashall 23:12:59 + for i in 
$(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o interactive-comments 23:12:59 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:59 + set +o xtrace 23:12:59 ++ echo hxB 23:12:59 ++ sed 's/./& /g' 23:12:59 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:59 + set +h 23:12:59 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:59 + set +x 23:12:59 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:59 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:59 + export TEST_OPTIONS= 23:12:59 + TEST_OPTIONS= 23:12:59 ++ mktemp -d 23:12:59 + WORKDIR=/tmp/tmp.R5STfyAPO3 23:12:59 + cd /tmp/tmp.R5STfyAPO3 23:12:59 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:59 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:13:00 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:13:00 Configure a credential helper to remove this warning. See 23:13:00 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:13:00 23:13:00 Login Succeeded 23:13:00 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:13:00 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:13:00 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:13:00 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:13:00 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:13:00 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:13:00 + relax_set 23:13:00 + set +e 23:13:00 + set +o pipefail 23:13:00 + . 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:13:00 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:13:00 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:13:00 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:13:00 +++ GERRIT_BRANCH=master 23:13:00 +++ echo GERRIT_BRANCH=master 23:13:00 GERRIT_BRANCH=master 23:13:00 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:13:00 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:13:00 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:13:00 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:13:01 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:13:01 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:13:01 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:13:01 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:13:01 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:13:01 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' 
/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:13:01 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:13:01 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:13:01 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:13:01 +++ grafana=false 23:13:01 +++ gui=false 23:13:01 +++ [[ 2 -gt 0 ]] 23:13:01 +++ key=apex-pdp 23:13:01 +++ case $key in 23:13:01 +++ echo apex-pdp 23:13:01 apex-pdp 23:13:01 +++ component=apex-pdp 23:13:01 +++ shift 23:13:01 +++ [[ 1 -gt 0 ]] 23:13:01 +++ key=--grafana 23:13:01 +++ case $key in 23:13:01 +++ grafana=true 23:13:01 +++ shift 23:13:01 +++ [[ 0 -gt 0 ]] 23:13:01 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:13:01 +++ echo 'Configuring docker compose...' 23:13:01 Configuring docker compose... 23:13:01 +++ source export-ports.sh 23:13:01 +++ source get-versions.sh 23:13:03 +++ '[' -z pap ']' 23:13:03 +++ '[' -n apex-pdp ']' 23:13:03 +++ '[' apex-pdp == logs ']' 23:13:03 +++ '[' true = true ']' 23:13:03 +++ echo 'Starting apex-pdp application with Grafana' 23:13:03 Starting apex-pdp application with Grafana 23:13:03 +++ docker-compose up -d apex-pdp grafana 23:13:04 Creating network "compose_default" with the default driver 23:13:05 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:13:05 latest: Pulling from prom/prometheus 23:13:08 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 23:13:08 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:13:08 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 
23:13:08 latest: Pulling from grafana/grafana 23:13:13 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e 23:13:13 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:13:13 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:13:14 10.10.2: Pulling from mariadb 23:13:19 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:13:19 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:13:19 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.3-SNAPSHOT)... 23:13:19 3.1.3-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:23 Digest: sha256:f41ae0e698a7eee4268ba3d29c141e50ab86dbca0876f787d3d80e16d6bffd9e 23:13:23 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.3-SNAPSHOT 23:13:23 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:24 latest: Pulling from confluentinc/cp-zookeeper 23:13:36 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 23:13:36 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:36 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:37 latest: Pulling from confluentinc/cp-kafka 23:13:50 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa 23:13:51 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:53 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.3-SNAPSHOT)... 23:13:54 3.1.3-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:56 Digest: sha256:4f56cebbee7604f04c833f29e04489e7d96d27f105a76e14f99d491eae674a75 23:13:56 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.3-SNAPSHOT 23:13:56 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT)... 
23:13:57 3.1.3-SNAPSHOT: Pulling from onap/policy-api 23:13:58 Digest: sha256:7fad0e07e4ad14d7b1ec6aec34f8583031a00f072037db0e6764795a9c95f7fd 23:13:58 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT 23:13:58 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT)... 23:13:58 3.1.3-SNAPSHOT: Pulling from onap/policy-pap 23:14:04 Digest: sha256:7f3b58c4f9b75937b65a0c67c12bb88aa2c134f077126cfa8a21b501b6bc004c 23:14:04 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT 23:14:04 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT)... 23:14:05 3.1.3-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:14:13 Digest: sha256:8770653266299381ba06ecf1ac20de5cc32cd747d987933c80da099704d6db0f 23:14:13 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT 23:14:13 Creating simulator ... 23:14:13 Creating zookeeper ... 23:14:13 Creating mariadb ... 23:14:13 Creating prometheus ... 23:14:24 Creating zookeeper ... done 23:14:24 Creating kafka ... 23:14:25 Creating kafka ... done 23:14:26 Creating prometheus ... done 23:14:26 Creating grafana ... 23:14:27 Creating grafana ... done 23:14:28 Creating mariadb ... done 23:14:28 Creating policy-db-migrator ... 23:14:29 Creating policy-db-migrator ... done 23:14:29 Creating policy-api ... 23:14:30 Creating policy-api ... done 23:14:30 Creating policy-pap ... 23:14:31 Creating policy-pap ... done 23:14:32 Creating simulator ... done 23:14:32 Creating policy-apex-pdp ... 23:14:33 Creating policy-apex-pdp ... 
done 23:14:33 +++ echo 'Prometheus server: http://localhost:30259' 23:14:33 Prometheus server: http://localhost:30259 23:14:33 +++ echo 'Grafana server: http://localhost:30269' 23:14:33 Grafana server: http://localhost:30269 23:14:33 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:33 ++ sleep 10 23:14:43 ++ unset http_proxy https_proxy 23:14:43 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:43 Waiting for REST to come up on localhost port 30003... 23:14:43 NAMES STATUS 23:14:43 policy-apex-pdp Up 10 seconds 23:14:43 policy-pap Up 12 seconds 23:14:43 policy-api Up 13 seconds 23:14:43 grafana Up 16 seconds 23:14:43 kafka Up 18 seconds 23:14:43 prometheus Up 17 seconds 23:14:43 mariadb Up 15 seconds 23:14:43 zookeeper Up 19 seconds 23:14:43 simulator Up 11 seconds 23:14:48 NAMES STATUS 23:14:48 policy-apex-pdp Up 15 seconds 23:14:48 policy-pap Up 17 seconds 23:14:48 policy-api Up 18 seconds 23:14:48 grafana Up 21 seconds 23:14:48 kafka Up 23 seconds 23:14:48 prometheus Up 22 seconds 23:14:48 mariadb Up 20 seconds 23:14:48 zookeeper Up 24 seconds 23:14:48 simulator Up 16 seconds 23:14:53 NAMES STATUS 23:14:53 policy-apex-pdp Up 20 seconds 23:14:53 policy-pap Up 22 seconds 23:14:53 policy-api Up 23 seconds 23:14:53 grafana Up 26 seconds 23:14:53 kafka Up 28 seconds 23:14:53 prometheus Up 27 seconds 23:14:53 mariadb Up 25 seconds 23:14:53 zookeeper Up 29 seconds 23:14:53 simulator Up 21 seconds 23:14:58 NAMES STATUS 23:14:58 policy-apex-pdp Up 25 seconds 23:14:58 policy-pap Up 27 seconds 23:14:58 policy-api Up 28 seconds 23:14:58 grafana Up 31 seconds 23:14:58 kafka Up 33 seconds 23:14:58 prometheus Up 32 seconds 23:14:58 mariadb Up 30 seconds 23:14:58 zookeeper Up 34 seconds 23:14:58 simulator Up 26 seconds 23:15:03 NAMES STATUS 23:15:03 policy-apex-pdp Up 30 seconds 23:15:03 policy-pap Up 32 seconds 23:15:03 policy-api Up 33 seconds 23:15:03 grafana Up 36 seconds 23:15:03 kafka Up 38 
seconds
23:15:03 prometheus Up 37 seconds
23:15:03 mariadb Up 35 seconds
23:15:03 zookeeper Up 39 seconds
23:15:03 simulator Up 31 seconds
23:15:03 ++ export 'SUITES=pap-test.robot
23:15:03 pap-slas.robot'
23:15:03 ++ SUITES='pap-test.robot
23:15:03 pap-slas.robot'
23:15:03 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:15:03 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:15:03 + load_set
23:15:03 + _setopts=hxB
23:15:03 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:15:03 ++ tr : ' '
23:15:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:03 + set +o braceexpand
23:15:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:03 + set +o hashall
23:15:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:03 + set +o interactive-comments
23:15:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:15:03 + set +o xtrace
23:15:03 ++ echo hxB
23:15:03 ++ sed 's/./& /g'
23:15:03 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:15:03 + set +h
23:15:03 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:15:03 + set +x
23:15:03 + docker_stats
23:15:03 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:15:03 ++ uname -s
23:15:03 + '[' Linux == Darwin ']'
23:15:03 + sh -c 'top -bn1 | head -3'
23:15:04 top - 23:15:03 up 4 min, 0 users, load average: 2.72, 1.31, 0.54
23:15:04 Tasks: 208 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
23:15:04 %Cpu(s): 12.5 us, 2.5 sy, 0.0 ni, 79.1 id, 5.8 wa, 0.0 hi, 0.1 si, 0.1 st
23:15:04 + echo
23:15:04 + sh -c 'free -h'
23:15:04 
23:15:04 total used free shared buff/cache available
23:15:04 Mem: 31G 2.7G 22G 1.3M 6.0G 28G
23:15:04 Swap: 1.0G 0B 1.0G
23:15:04 + echo
23:15:04 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:15:04 
23:15:04 NAMES STATUS
23:15:04 policy-apex-pdp Up 30 seconds
23:15:04 policy-pap Up 32 seconds
23:15:04 policy-api Up 33 seconds
23:15:04 grafana Up 36 seconds
23:15:04 kafka Up 39 seconds
23:15:04 prometheus Up 37 seconds
23:15:04 mariadb Up 35 seconds
23:15:04 zookeeper Up 39 seconds
23:15:04 simulator Up 31 seconds
23:15:04 + echo
23:15:04 
23:15:04 + docker stats --no-stream
23:15:06 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:15:06 d2831c1bd886 policy-apex-pdp 1.04% 173.3MiB / 31.41GiB 0.54% 7.12kB / 6.86kB 0B / 0B 48
23:15:06 7f8cac653cf8 policy-pap 3.35% 553.4MiB / 31.41GiB 1.72% 31kB / 32.6kB 0B / 149MB 62
23:15:06 89f9a1c57596 policy-api 0.11% 525.5MiB / 31.41GiB 1.63% 988kB / 647kB 0B / 0B 53
23:15:06 c43301a251a0 grafana 0.03% 52.79MiB / 31.41GiB 0.16% 19.5kB / 3.51kB 0B / 24.9MB 14
23:15:06 8e7de2fb72f7 kafka 0.77% 372.9MiB / 31.41GiB 1.16% 69kB / 71.9kB 0B / 508kB 83
23:15:06 41bcee903f85 prometheus 0.16% 18.02MiB / 31.41GiB 0.06% 1.64kB / 474B 225kB / 0B 11
23:15:06 4b6fbb1d3cc2 mariadb 0.02% 102.1MiB / 31.41GiB 0.32% 935kB / 1.18MB 11MB / 67.9MB 37
23:15:06 3ba040297ad4 zookeeper 0.11% 102.4MiB / 31.41GiB 0.32% 56.1kB / 49.3kB 4.1kB / 401kB 60
23:15:06 f8e85637ac06 simulator 0.06% 120.6MiB / 31.41GiB 0.37% 1.15kB / 0B 0B / 0B 76
23:15:06 + echo
23:15:06 
23:15:06 + cd /tmp/tmp.R5STfyAPO3
23:15:06 + echo 'Reading the testplan:'
23:15:06 Reading the testplan:
23:15:06 + echo 'pap-test.robot
23:15:06 pap-slas.robot'
23:15:06 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:15:06 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:15:06 + cat testplan.txt
23:15:06 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:15:06 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:15:06 ++ xargs
23:15:06 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:15:06 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:15:06 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:15:06 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:15:06 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:15:06 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:15:06 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
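The testplan expansion traced above (strip comment and blank lines, prefix each suite with the tests directory, collapse the list to one line with `xargs`) can be reproduced as a small standalone sketch. The directory and suite names below are taken directly from this build's trace; the sample comment line in the generated `testplan.txt` is added for illustration.

```shell
#!/bin/bash
# Sketch of the testplan expansion seen in the trace above.
# TESTS_DIR and the suite names come from this build's workspace layout.
TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests

# testplan.txt as the CSIT scripts consume it: one suite per line,
# with comments and blank lines permitted.
printf '# pap testplan\n\npap-test.robot\npap-slas.robot\n' > testplan.txt

# Drop comment/blank lines, prefix each suite with TESTS_DIR, then let
# xargs join the paths into a single space-separated string for robot.run.
SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
  | sed "s|^|${TESTS_DIR}/|" | xargs)

echo "SUITES=${SUITES}"
```

The `xargs` at the end is what turns the newline-separated suite list into the single-line argument string that is later passed to `python3 -m robot.run`.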
23:15:06 + relax_set
23:15:06 + set +e
23:15:06 + set +o pipefail
23:15:06 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:15:06 ==============================================================================
23:15:06 pap
23:15:06 ==============================================================================
23:15:06 pap.Pap-Test
23:15:06 ==============================================================================
23:15:07 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:15:07 ------------------------------------------------------------------------------
23:15:08 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:15:08 ------------------------------------------------------------------------------
23:15:08 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:15:08 ------------------------------------------------------------------------------
23:15:09 Healthcheck :: Verify policy pap health check | PASS |
23:15:09 ------------------------------------------------------------------------------
23:15:29 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:29 ------------------------------------------------------------------------------
23:15:29 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:29 ------------------------------------------------------------------------------
23:15:30 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:30 ------------------------------------------------------------------------------
23:15:30 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:30 ------------------------------------------------------------------------------
23:15:30 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:30 ------------------------------------------------------------------------------
23:15:30 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:30 ------------------------------------------------------------------------------
23:15:30 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:30 ------------------------------------------------------------------------------
23:15:31 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:31 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:31 ------------------------------------------------------------------------------
23:15:32 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:32 ------------------------------------------------------------------------------
23:15:52 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:52 ------------------------------------------------------------------------------
23:15:52 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:52 ------------------------------------------------------------------------------
23:15:52 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:52 ------------------------------------------------------------------------------
23:15:52 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:52 ------------------------------------------------------------------------------
23:15:52 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:52 ------------------------------------------------------------------------------
23:15:52 pap.Pap-Test | PASS |
23:15:52 22 tests, 22 passed, 0 failed
23:15:52 ==============================================================================
23:15:52 pap.Pap-Slas
23:15:52 ==============================================================================
23:16:52 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:52 ------------------------------------------------------------------------------
23:16:52 pap.Pap-Slas | PASS |
23:16:52 8 tests, 8 passed, 0 failed
23:16:52 ==============================================================================
23:16:52 pap | PASS |
23:16:52 30 tests, 30 passed, 0 failed
23:16:52 ==============================================================================
23:16:52 Output: /tmp/tmp.R5STfyAPO3/output.xml
23:16:52 Log: /tmp/tmp.R5STfyAPO3/log.html
23:16:52 Report: /tmp/tmp.R5STfyAPO3/report.html
23:16:52 + RESULT=0
23:16:52 + load_set
23:16:52 + _setopts=hxB
23:16:52 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:52 ++ tr : ' '
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o braceexpand
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o hashall
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o interactive-comments
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o xtrace
23:16:52 ++ echo hxB
23:16:52 ++ sed 's/./& /g'
23:16:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:52 + set +h
23:16:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:52 + set +x
23:16:52 + echo 'RESULT: 0'
23:16:52 RESULT: 0
23:16:52 + exit 0
23:16:52 + on_exit
23:16:52 + rc=0
23:16:52 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:52 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:52 NAMES STATUS
23:16:52 policy-apex-pdp Up 2 minutes
23:16:52 policy-pap Up 2 minutes
23:16:52 policy-api Up 2 minutes
23:16:52 grafana Up 2 minutes
23:16:52 kafka Up 2 minutes
23:16:52 prometheus Up 2 minutes
23:16:52 mariadb Up 2 minutes
23:16:52 zookeeper Up 2 minutes
23:16:52 simulator Up 2 minutes
23:16:52 + docker_stats
23:16:52 ++ uname -s
23:16:52 + '[' Linux == Darwin ']'
23:16:52 + sh -c 'top -bn1 | head -3'
23:16:53 top - 23:16:53 up 6 min, 0 users, load average: 0.52, 0.93, 0.49
23:16:53 Tasks: 198 total, 1
running, 128 sleeping, 0 stopped, 0 zombie
23:16:53 %Cpu(s): 10.1 us, 1.9 sy, 0.0 ni, 83.6 id, 4.2 wa, 0.0 hi, 0.1 si, 0.1 st
23:16:53 + echo
23:16:53 
23:16:53 + sh -c 'free -h'
23:16:53 total used free shared buff/cache available
23:16:53 Mem: 31G 2.7G 22G 1.3M 6.0G 28G
23:16:53 Swap: 1.0G 0B 1.0G
23:16:53 + echo
23:16:53 
23:16:53 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:53 NAMES STATUS
23:16:53 policy-apex-pdp Up 2 minutes
23:16:53 policy-pap Up 2 minutes
23:16:53 policy-api Up 2 minutes
23:16:53 grafana Up 2 minutes
23:16:53 kafka Up 2 minutes
23:16:53 prometheus Up 2 minutes
23:16:53 mariadb Up 2 minutes
23:16:53 zookeeper Up 2 minutes
23:16:53 simulator Up 2 minutes
23:16:53 + echo
23:16:53 
23:16:53 + docker stats --no-stream
23:16:55 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:16:55 d2831c1bd886 policy-apex-pdp 1.22% 179MiB / 31.41GiB 0.56% 55.3kB / 79.2kB 0B / 0B 52
23:16:55 7f8cac653cf8 policy-pap 1.09% 485.1MiB / 31.41GiB 1.51% 2.47MB / 1.05MB 0B / 149MB 66
23:16:55 89f9a1c57596 policy-api 0.22% 525.3MiB / 31.41GiB 1.63% 2.45MB / 1.1MB 0B / 0B 56
23:16:55 c43301a251a0 grafana 0.04% 63.04MiB / 31.41GiB 0.20% 20.5kB / 4.5kB 0B / 24.9MB 14
23:16:55 8e7de2fb72f7 kafka 1.32% 402.1MiB / 31.41GiB 1.25% 238kB / 214kB 0B / 606kB 85
23:16:55 41bcee903f85 prometheus 0.00% 23.8MiB / 31.41GiB 0.07% 180kB / 10.1kB 225kB / 0B 13
23:16:55 4b6fbb1d3cc2 mariadb 0.01% 103.3MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 68.1MB 28
23:16:55 3ba040297ad4 zookeeper 0.12% 102.4MiB / 31.41GiB 0.32% 59kB / 50.8kB 4.1kB / 401kB 60
23:16:55 f8e85637ac06 simulator 0.10% 120.7MiB / 31.41GiB 0.38% 1.37kB / 0B 0B / 0B 78
23:16:55 + echo
23:16:55 
23:16:55 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:55 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:55 + relax_set
23:16:55 + set +e
23:16:55 + set +o pipefail
23:16:55 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:55 ++ echo 'Shut down started!'
23:16:55 Shut down started!
23:16:55 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:55 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:55 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:55 ++ source export-ports.sh
23:16:55 ++ source get-versions.sh
23:16:57 ++ echo 'Collecting logs from docker compose containers...'
23:16:57 Collecting logs from docker compose containers...
23:16:57 ++ docker-compose logs
23:16:59 ++ cat docker_compose.log
23:16:59 Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, prometheus, mariadb, zookeeper, simulator
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292634521Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-29T23:14:27Z
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292843334Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292855144Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292858714Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292862714Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292865664Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292868724Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292871404Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292874324Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292877184Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292879764Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292883524Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292886584Z level=info msg=Target target=[all]
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292892104Z level=info msg="Path Home" path=/usr/share/grafana
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292894944Z level=info msg="Path Data" path=/var/lib/grafana
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292900324Z level=info msg="Path Logs" path=/var/log/grafana
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292903394Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292906514Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
23:16:59 grafana | logger=settings t=2024-04-29T23:14:27.292909664Z level=info msg="App mode production"
23:16:59 grafana | logger=sqlstore t=2024-04-29T23:14:27.293164937Z level=info msg="Connecting to DB" dbtype=sqlite3
23:16:59 grafana | logger=sqlstore t=2024-04-29T23:14:27.293183717Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.293769313Z level=info msg="Starting DB migrations"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.294636342Z level=info msg="Executing migration" id="create migration_log table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.295412251Z level=info msg="Migration successfully executed" id="create migration_log table" duration=775.629µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.299894509Z level=info msg="Executing migration" id="create user table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.300441756Z level=info msg="Migration successfully executed" id="create user table" duration=547.136µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.305916734Z level=info msg="Executing migration" id="add unique index user.login"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.306626362Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=709.028µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.311338722Z level=info msg="Executing migration" id="add unique index user.email"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.312454545Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.116053ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.317067334Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.318039315Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=972.301µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.321709514Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.322309941Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=600.607µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.32692337Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.32965624Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.7312ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.333295139Z level=info msg="Executing migration" id="create user table v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.334531513Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.238574ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.338195702Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.33890265Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=706.268µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.34353885Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.344188337Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=649.247µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.348774836Z level=info msg="Executing migration" id="copy data_source v1 to v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.349355782Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=583.396µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.35370113Z level=info msg="Executing migration" id="Drop old table user_v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.35471145Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.007381ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.359427251Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.360587333Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.159842ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.364556966Z level=info msg="Executing migration" id="Update user table charset"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.364585136Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.43µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.368231246Z level=info msg="Executing migration" id="Add last_seen_at column to user"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.369364778Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.133422ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.373029407Z level=info msg="Executing migration" id="Add missing user data"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.37325398Z level=info msg="Migration successfully executed" id="Add missing user data" duration=224.393µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.377765259Z level=info msg="Executing migration" id="Add is_disabled column to user"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.379681309Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.91573ms
23:16:59 kafka | ===> User
23:16:59 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:59 kafka | ===> Configuring ...
23:16:59 kafka | Running in Zookeeper mode...
23:16:59 kafka | ===> Running preflight checks ...
23:16:59 kafka | ===> Check if /var/lib/kafka/data is writable ...
23:16:59 kafka | ===> Check if Zookeeper is healthy ...
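The preflight step above checks Zookeeper health using the Confluent image's bundled Java utility (visible in the Zookeeper client session that follows). As a rough standalone equivalent, and purely as an assumption for illustration rather than what the image actually runs, the standard `ruok` four-letter-word probe can be sketched as:

```shell
# Probe a Zookeeper server with the standard 'ruok' four-letter word;
# a healthy server answers "imok". ZK_HOST/ZK_PORT defaults mirror the
# compose service name and port seen in the log; the nc-based probe
# itself is illustrative, not the image's real health check.
zk_healthy() {
  # $1 = host, $2 = port
  resp=$(printf 'ruok' | nc -w 2 "$1" "$2" 2>/dev/null)
  [ "$resp" = "imok" ]
}

if zk_healthy "${ZK_HOST:-zookeeper}" "${ZK_PORT:-2181}"; then
  echo "===> Zookeeper is healthy"
else
  echo "===> Zookeeper is not reachable" >&2
fi
```

Note that since Zookeeper 3.5 the four-letter words must be whitelisted (`4lw.commands.whitelist`) on the server for `ruok` to answer, so this probe can report failure even against a live but locked-down server.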
23:16:59 kafka | [2024-04-29 23:14:28,879] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,879] INFO Client environment:host.name=8e7de2fb72f7 (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,879] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,879] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,879] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,879] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6
.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/sh
are/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,880] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,883] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:28,887] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:59 
kafka | [2024-04-29 23:14:28,891] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:59 kafka | [2024-04-29 23:14:28,898] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:59 kafka | [2024-04-29 23:14:28,918] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
23:16:59 kafka | [2024-04-29 23:14:28,919] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
23:16:59 kafka | [2024-04-29 23:14:28,929] INFO Socket connection established, initiating session, client: /172.17.0.6:35394, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
23:16:59 kafka | [2024-04-29 23:14:28,962] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003dd5c0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
23:16:59 kafka | [2024-04-29 23:14:29,080] INFO Session: 0x1000003dd5c0000 closed (org.apache.zookeeper.ZooKeeper)
23:16:59 kafka | [2024-04-29 23:14:29,081] INFO EventThread shut down for session: 0x1000003dd5c0000 (org.apache.zookeeper.ClientCnxn)
23:16:59 kafka | Using log4j config /etc/kafka/log4j.properties
23:16:59 kafka | ===> Launching ...
23:16:59 kafka | ===> Launching kafka ...
23:16:59 kafka | [2024-04-29 23:14:29,807] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
23:16:59 kafka | [2024-04-29 23:14:30,117] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:59 kafka | [2024-04-29 23:14:30,193] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
23:16:59 kafka | [2024-04-29 23:14:30,194] INFO starting (kafka.server.KafkaServer)
23:16:59 kafka | [2024-04-29 23:14:30,194] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
23:16:59 kafka | [2024-04-29 23:14:30,207] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
23:16:59 kafka | [2024-04-29 23:14:30,211] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
23:16:59 kafka | [2024-04-29 23:14:30,211] INFO Client environment:host.name=8e7de2fb72f7 (org.apache.zookeeper.ZooKeeper)
23:16:59 kafka | [2024-04-29 23:14:30,211] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
23:16:59 kafka | [2024-04-29 23:14:30,211] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,211] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1
-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/jav
a/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../shar
e/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:59 mariadb | 2024-04-29 23:14:28+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:59 mariadb | 2024-04-29 23:14:28+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:59 mariadb | 2024-04-29 23:14:28+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
23:16:59 mariadb | 2024-04-29 23:14:28+00:00 [Note] [Entrypoint]: Initializing database files 23:16:59 mariadb | 2024-04-29 23:14:28 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:59 mariadb | 2024-04-29 23:14:28 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:59 mariadb | 2024-04-29 23:14:28 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:59 mariadb | 23:16:59 mariadb | 23:16:59 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:59 mariadb | To do so, start the server, then issue the following command: 23:16:59 mariadb | 23:16:59 mariadb | '/usr/bin/mysql_secure_installation' 23:16:59 mariadb | 23:16:59 mariadb | which will also give you the option of removing the test 23:16:59 mariadb | databases and anonymous user created by default. This is 23:16:59 mariadb | strongly recommended for production servers. 23:16:59 mariadb | 23:16:59 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:59 mariadb | 23:16:59 mariadb | Please report any problems at https://mariadb.org/jira 23:16:59 mariadb | 23:16:59 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:59 mariadb | 23:16:59 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:59 mariadb | https://mariadb.org/get-involved/ 23:16:59 mariadb | 23:16:59 mariadb | 2024-04-29 23:14:30+00:00 [Note] [Entrypoint]: Database files initialized 23:16:59 mariadb | 2024-04-29 23:14:30+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:59 mariadb | 2024-04-29 23:14:30+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 100 ... 
23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: Number of transaction pools: 1 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: 128 rollback segments are active. 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:59 mariadb | 2024-04-29 23:14:30 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
23:16:59 mariadb | 2024-04-29 23:14:30 0 [Note] mariadbd: ready for connections. 23:16:59 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:59 mariadb | 2024-04-29 23:14:31+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:59 mariadb | 2024-04-29 23:14:32+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:59 mariadb | 2024-04-29 23:14:32+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:59 mariadb | 23:16:59 mariadb | 2024-04-29 23:14:32+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:59 mariadb | 23:16:59 mariadb | 2024-04-29 23:14:32+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:59 mariadb | #!/bin/bash -xv 23:16:59 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:59 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:59 mariadb | # 23:16:59 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:59 mariadb | # you may not use this file except in compliance with the License. 23:16:59 mariadb | # You may obtain a copy of the License at 23:16:59 mariadb | # 23:16:59 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:59 mariadb | # 23:16:59 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:59 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:59 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:59 mariadb | # See the License for the specific language governing permissions and 23:16:59 mariadb | # limitations under the License. 
23:16:59 mariadb | 23:16:59 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:59 mariadb | do 23:16:59 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:59 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:59 mariadb | done 23:16:59 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:59 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:59 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:59 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:59 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:59 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:59 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:59 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:59 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:59 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:59 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:59 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:59 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:59 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:59 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON 
`clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:59 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:59 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:59 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:59 mariadb | 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,212] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,214] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) 23:16:59 kafka | [2024-04-29 23:14:30,218] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:59 kafka | [2024-04-29 
23:14:30,224] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:59 kafka | [2024-04-29 23:14:30,226] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:59 kafka | [2024-04-29 23:14:30,229] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:16:59 kafka | [2024-04-29 23:14:30,235] INFO Socket connection established, initiating session, client: /172.17.0.6:35396, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:16:59 kafka | [2024-04-29 23:14:30,246] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003dd5c0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:59 kafka | [2024-04-29 23:14:30,253] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:16:59 kafka | [2024-04-29 23:14:30,535] INFO Cluster ID = 1q8HESR3R-yEc2qak37gtw (kafka.server.KafkaServer) 23:16:59 kafka | [2024-04-29 23:14:30,537] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:59 kafka | [2024-04-29 23:14:30,587] INFO KafkaConfig values: 23:16:59 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:59 kafka | alter.config.policy.class.name = null 23:16:59 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:59 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:59 kafka | authorizer.class.name = 23:16:59 kafka | auto.create.topics.enable = true 23:16:59 kafka | auto.include.jmx.reporter = true 23:16:59 kafka | auto.leader.rebalance.enable = true 23:16:59 kafka | background.threads = 10 23:16:59 kafka | broker.heartbeat.interval.ms = 2000 23:16:59 kafka | broker.id = 1 23:16:59 kafka | broker.id.generation.enable = true 23:16:59 kafka | broker.rack = null 23:16:59 kafka | 
broker.session.timeout.ms = 9000 23:16:59 kafka | client.quota.callback.class = null 23:16:59 kafka | compression.type = producer 23:16:59 kafka | connection.failed.authentication.delay.ms = 100 23:16:59 kafka | connections.max.idle.ms = 600000 23:16:59 kafka | connections.max.reauth.ms = 0 23:16:59 kafka | control.plane.listener.name = null 23:16:59 kafka | controlled.shutdown.enable = true 23:16:59 kafka | controlled.shutdown.max.retries = 3 23:16:59 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:59 kafka | controller.listener.names = null 23:16:59 kafka | controller.quorum.append.linger.ms = 25 23:16:59 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:59 kafka | controller.quorum.election.timeout.ms = 1000 23:16:59 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:59 kafka | controller.quorum.request.timeout.ms = 2000 23:16:59 kafka | controller.quorum.retry.backoff.ms = 20 23:16:59 kafka | controller.quorum.voters = [] 23:16:59 kafka | controller.quota.window.num = 11 23:16:59 kafka | controller.quota.window.size.seconds = 1 23:16:59 kafka | controller.socket.timeout.ms = 30000 23:16:59 kafka | create.topic.policy.class.name = null 23:16:59 kafka | default.replication.factor = 1 23:16:59 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:59 kafka | delegation.token.expiry.time.ms = 86400000 23:16:59 kafka | delegation.token.master.key = null 23:16:59 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:59 kafka | delegation.token.secret.key = null 23:16:59 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:59 kafka | delete.topic.enable = true 23:16:59 kafka | early.start.listeners = null 23:16:59 kafka | fetch.max.bytes = 57671680 23:16:59 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:59 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:59 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:59 kafka | 
group.consumer.max.heartbeat.interval.ms = 15000 23:16:59 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:59 kafka | group.consumer.max.size = 2147483647 23:16:59 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:59 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:59 kafka | group.consumer.session.timeout.ms = 45000 23:16:59 kafka | group.coordinator.new.enable = false 23:16:59 kafka | group.coordinator.threads = 1 23:16:59 kafka | group.initial.rebalance.delay.ms = 3000 23:16:59 kafka | group.max.session.timeout.ms = 1800000 23:16:59 kafka | group.max.size = 2147483647 23:16:59 kafka | group.min.session.timeout.ms = 6000 23:16:59 kafka | initial.broker.registration.timeout.ms = 60000 23:16:59 kafka | inter.broker.listener.name = PLAINTEXT 23:16:59 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:59 kafka | kafka.metrics.polling.interval.secs = 10 23:16:59 kafka | kafka.metrics.reporters = [] 23:16:59 kafka | leader.imbalance.check.interval.seconds = 300 23:16:59 kafka | leader.imbalance.per.broker.percentage = 10 23:16:59 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:59 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:59 kafka | log.cleaner.backoff.ms = 15000 23:16:59 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:59 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:59 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:59 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:59 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:59 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:59 mariadb | 23:16:59 mariadb | 2024-04-29 23:14:33+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:59 mariadb | 2024-04-29 23:14:33 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:59 mariadb | 
2024-04-29 23:14:33 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:59 mariadb | 2024-04-29 23:14:33 0 [Note] InnoDB: Starting shutdown... 23:16:59 mariadb | 2024-04-29 23:14:33 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:59 mariadb | 2024-04-29 23:14:33 0 [Note] InnoDB: Buffer pool(s) dump completed at 240429 23:14:33 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Shutdown completed; log sequence number 332453; transaction id 298 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] mariadbd: Shutdown complete 23:16:59 mariadb | 23:16:59 mariadb | 2024-04-29 23:14:34+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:59 mariadb | 23:16:59 mariadb | 2024-04-29 23:14:34+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:59 mariadb | 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Number of transaction pools: 1 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: 128 rollback segments are active. 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: log sequence number 332453; transaction id 299 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] Server socket created on IP: '::'. 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] mariadbd: ready for connections. 23:16:59 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:59 mariadb | 2024-04-29 23:14:34 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:59 mariadb | 2024-04-29 23:14:34 0 [Note] InnoDB: Buffer pool(s) load completed at 240429 23:14:34 23:16:59 mariadb | 2024-04-29 23:14:34 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:59 mariadb | 2024-04-29 23:14:34 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:59 mariadb | 2024-04-29 23:14:34 18 [Warning] Aborted connection 18 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.38446851Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.385243039Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=778.499µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.389659656Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.390815359Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.148823ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.394041854Z 
level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.402230323Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.188059ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.407145936Z level=info msg="Executing migration" id="Add uid column to user" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.407982344Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=836.298µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.411180559Z level=info msg="Executing migration" id="Update uid column values for users" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.411366321Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=185.702µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.415303764Z level=info msg="Executing migration" id="Add unique index user_uid" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.417149983Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.847429ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.421377569Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.421729813Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=352.424µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.427338653Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.428350144Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.011991ms 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:27.432235726Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.433085705Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=849.709µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.437182749Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.437764426Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=581.427µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.443153153Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.443837791Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=684.608µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.447870814Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.448490571Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=619.777µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.452243942Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.452267912Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=24.86µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.456646359Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.457796262Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.151013ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.46136879Z 
level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.463125279Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.756569ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.466621376Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.467308514Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=690.818µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.472220057Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.473313668Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.092981ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.476830276Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.47993485Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.104334ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.483569369Z level=info msg="Executing migration" id="create temp_user v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.484946694Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.377565ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.49019031Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.491265983Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.082652ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.496526988Z level=info msg="Executing migration" id="create 
index IDX_temp_user_org_id - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.497228197Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=702.959µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.500268269Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.500924276Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=662.087µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.504145821Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.504818698Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=672.367µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.510234856Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.510584931Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=350.305µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.513483332Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.514013238Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=529.466µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.516969429Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
23:16:59 kafka | log.cleaner.enable = true
23:16:59 kafka | log.cleaner.io.buffer.load.factor = 0.9
23:16:59 kafka | log.cleaner.io.buffer.size = 524288
23:16:59 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
23:16:59 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
23:16:59 kafka | log.cleaner.min.cleanable.ratio = 0.5
23:16:59 kafka | log.cleaner.min.compaction.lag.ms = 0
23:16:59 kafka | log.cleaner.threads = 1
23:16:59 kafka | log.cleanup.policy = [delete]
23:16:59 kafka | log.dir = /tmp/kafka-logs
23:16:59 kafka | log.dirs = /var/lib/kafka/data
23:16:59 kafka | log.flush.interval.messages = 9223372036854775807
23:16:59 kafka | log.flush.interval.ms = null
23:16:59 kafka | log.flush.offset.checkpoint.interval.ms = 60000
23:16:59 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
23:16:59 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
23:16:59 kafka | log.index.interval.bytes = 4096
23:16:59 kafka | log.index.size.max.bytes = 10485760
23:16:59 kafka | log.local.retention.bytes = -2
23:16:59 kafka | log.local.retention.ms = -2
23:16:59 kafka | log.message.downconversion.enable = true
23:16:59 kafka | log.message.format.version = 3.0-IV1
23:16:59 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
23:16:59 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
23:16:59 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
23:16:59 kafka | log.message.timestamp.type = CreateTime
23:16:59 kafka | log.preallocate = false
23:16:59 kafka | log.retention.bytes = -1
23:16:59 kafka | log.retention.check.interval.ms = 300000
23:16:59 kafka | log.retention.hours = 168
23:16:59 kafka | log.retention.minutes = null
23:16:59 kafka | log.retention.ms = null
23:16:59 kafka | log.roll.hours = 168
23:16:59 kafka | log.roll.jitter.hours = 0
23:16:59 kafka | log.roll.jitter.ms = null
23:16:59 kafka | log.roll.ms = null
23:16:59 kafka | log.segment.bytes = 1073741824
23:16:59 kafka | log.segment.delete.delay.ms = 60000
23:16:59 kafka | max.connection.creation.rate = 2147483647
23:16:59 kafka | max.connections = 2147483647
23:16:59 kafka | max.connections.per.ip = 2147483647
23:16:59 kafka | max.connections.per.ip.overrides =
23:16:59 kafka | max.incremental.fetch.session.cache.slots = 1000
23:16:59 kafka | message.max.bytes = 1048588
23:16:59 kafka | metadata.log.dir = null
23:16:59 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
23:16:59 kafka | metadata.log.max.snapshot.interval.ms = 3600000
23:16:59 kafka | metadata.log.segment.bytes = 1073741824
23:16:59 kafka | metadata.log.segment.min.bytes = 8388608
23:16:59 kafka | metadata.log.segment.ms = 604800000
23:16:59 kafka | metadata.max.idle.interval.ms = 500
23:16:59 kafka | metadata.max.retention.bytes = 104857600
23:16:59 kafka | metadata.max.retention.ms = 604800000
23:16:59 kafka | metric.reporters = []
23:16:59 kafka | metrics.num.samples = 2
23:16:59 kafka | metrics.recording.level = INFO
23:16:59 kafka | metrics.sample.window.ms = 30000
23:16:59 kafka | min.insync.replicas = 1
23:16:59 kafka | node.id = 1
23:16:59 kafka | num.io.threads = 8
23:16:59 kafka | num.network.threads = 3
23:16:59 kafka | num.partitions = 1
23:16:59 kafka | num.recovery.threads.per.data.dir = 1
23:16:59 kafka | num.replica.alter.log.dirs.threads = null
23:16:59 kafka | num.replica.fetchers = 1
23:16:59 kafka | offset.metadata.max.bytes = 4096
23:16:59 kafka | offsets.commit.required.acks = -1
23:16:59 kafka | offsets.commit.timeout.ms = 5000
23:16:59 kafka | offsets.load.buffer.size = 5242880
23:16:59 kafka | offsets.retention.check.interval.ms = 600000
23:16:59 kafka | offsets.retention.minutes = 10080
23:16:59 kafka | offsets.topic.compression.codec = 0
23:16:59 kafka | offsets.topic.num.partitions = 50
23:16:59 kafka | offsets.topic.replication.factor = 1
23:16:59 kafka | offsets.topic.segment.bytes = 104857600
23:16:59 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
23:16:59 kafka | password.encoder.iterations = 4096
23:16:59 kafka | password.encoder.key.length = 128
23:16:59 kafka | password.encoder.keyfactory.algorithm = null
23:16:59 kafka | password.encoder.old.secret = null
23:16:59 kafka | password.encoder.secret = null
23:16:59 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
23:16:59 kafka | process.roles = []
23:16:59 kafka | producer.id.expiration.check.interval.ms = 600000
23:16:59 kafka | producer.id.expiration.ms = 86400000
23:16:59 kafka | producer.purgatory.purge.interval.requests = 1000
23:16:59 kafka | queued.max.request.bytes = -1
23:16:59 kafka | queued.max.requests = 500
23:16:59 kafka | quota.window.num = 11
23:16:59 kafka | quota.window.size.seconds = 1
23:16:59 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
23:16:59 kafka | remote.log.manager.task.interval.ms = 30000
23:16:59 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
23:16:59 kafka | remote.log.manager.task.retry.backoff.ms = 500
23:16:59 kafka | remote.log.manager.task.retry.jitter = 0.2
23:16:59 kafka | remote.log.manager.thread.pool.size = 10
23:16:59 kafka | remote.log.metadata.custom.metadata.max.bytes = 128
23:16:59 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
23:16:59 kafka | remote.log.metadata.manager.class.path = null
23:16:59 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
23:16:59 kafka | remote.log.metadata.manager.listener.name = null
23:16:59 kafka | remote.log.reader.max.pending.tasks = 100
23:16:59 kafka | remote.log.reader.threads = 10
23:16:59 kafka | remote.log.storage.manager.class.name = null
23:16:59 kafka | remote.log.storage.manager.class.path = null
23:16:59 kafka | remote.log.storage.manager.impl.prefix = rsm.config.
23:16:59 kafka | remote.log.storage.system.enable = false
23:16:59 kafka | replica.fetch.backoff.ms = 1000
23:16:59 kafka | replica.fetch.max.bytes = 1048576
23:16:59 kafka | replica.fetch.min.bytes = 1
23:16:59 kafka | replica.fetch.response.max.bytes = 10485760
23:16:59 kafka | replica.fetch.wait.max.ms = 500
23:16:59 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.517312793Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=343.304µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.520552288Z level=info msg="Executing migration" id="create star table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.521130214Z level=info msg="Migration successfully executed" id="create star table" duration=577.536µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.526587583Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.527184479Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=596.836µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.530289693Z level=info msg="Executing migration" id="create org table v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.530890279Z level=info msg="Migration successfully executed" id="create org table v1" duration=600.576µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.53373853Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.534350167Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=610.387µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.537762803Z level=info msg="Executing migration" id="create org_user table v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.538967527Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.205814ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.544297074Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.545131073Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=831.669µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.548152655Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.549008724Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=856.019µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.551958056Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.552848186Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=890.08µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.555738858Z level=info msg="Executing migration" id="Update org table charset"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.555768678Z level=info msg="Migration successfully executed" id="Update org table charset" duration=30.16µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.561460769Z level=info msg="Executing migration" id="Update org_user table charset"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.56149609Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=35.461µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.565229339Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.565461452Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=230.903µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.568772957Z level=info msg="Executing migration" id="create dashboard table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.569834939Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.062012ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.573445688Z level=info msg="Executing migration" id="add index dashboard.account_id"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.574341307Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=896.109µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.579649865Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.580310712Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=660.477µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.584132154Z level=info msg="Executing migration" id="create dashboard_tag table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.58470679Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=574.777µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.587785343Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.589185718Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.400085ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.595139842Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.595931441Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=790.139µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.599260906Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.605564294Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.303858ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.611188824Z level=info msg="Executing migration" id="create dashboard v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.611779672Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=591.038µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.617385992Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.618331962Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=947.86µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.621523326Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.622580707Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.056761ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.625586591Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.625889254Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=302.823µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.631691816Z level=info msg="Executing migration" id="drop table dashboard_v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.632352893Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=660.957µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.635229224Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.635293895Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=64.951µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.638410418Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.640841175Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.430057ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.646594696Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.649077954Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.487427ms
23:16:59 kafka | replica.lag.time.max.ms = 30000
23:16:59 kafka | replica.selector.class = null
23:16:59 kafka | replica.socket.receive.buffer.bytes = 65536
23:16:59 kafka | replica.socket.timeout.ms = 30000
23:16:59 kafka | replication.quota.window.num = 11
23:16:59 kafka | replication.quota.window.size.seconds = 1
23:16:59 kafka | request.timeout.ms = 30000
23:16:59 kafka | reserved.broker.max.id = 1000
23:16:59 kafka | sasl.client.callback.handler.class = null
23:16:59 kafka | sasl.enabled.mechanisms = [GSSAPI]
23:16:59 kafka | sasl.jaas.config = null
23:16:59 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:59 kafka | sasl.kerberos.min.time.before.relogin = 60000
23:16:59 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
23:16:59 kafka | sasl.kerberos.service.name = null
23:16:59 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:59 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:59 kafka | sasl.login.callback.handler.class = null
23:16:59 kafka | sasl.login.class = null
23:16:59 kafka | sasl.login.connect.timeout.ms = null
23:16:59 kafka | sasl.login.read.timeout.ms = null
23:16:59 kafka | sasl.login.refresh.buffer.seconds = 300
23:16:59 kafka | sasl.login.refresh.min.period.seconds = 60
23:16:59 kafka | sasl.login.refresh.window.factor = 0.8
23:16:59 kafka | sasl.login.refresh.window.jitter = 0.05
23:16:59 kafka | sasl.login.retry.backoff.max.ms = 10000
23:16:59 kafka | sasl.login.retry.backoff.ms = 100
23:16:59 kafka | sasl.mechanism.controller.protocol = GSSAPI
23:16:59 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
23:16:59 kafka | sasl.oauthbearer.clock.skew.seconds = 30
23:16:59 kafka | sasl.oauthbearer.expected.audience = null
23:16:59 kafka | sasl.oauthbearer.expected.issuer = null
23:16:59 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:59 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:59 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:59 kafka | sasl.oauthbearer.jwks.endpoint.url = null
23:16:59 kafka | sasl.oauthbearer.scope.claim.name = scope
23:16:59 kafka | sasl.oauthbearer.sub.claim.name = sub
23:16:59 kafka | sasl.oauthbearer.token.endpoint.url = null
23:16:59 kafka | sasl.server.callback.handler.class = null
23:16:59 kafka | sasl.server.max.receive.size = 524288
23:16:59 kafka | security.inter.broker.protocol = PLAINTEXT
23:16:59 kafka | security.providers = null
23:16:59 kafka | server.max.startup.time.ms = 9223372036854775807
23:16:59 kafka | socket.connection.setup.timeout.max.ms = 30000
23:16:59 kafka | socket.connection.setup.timeout.ms = 10000
23:16:59 kafka | socket.listen.backlog.size = 50
23:16:59 kafka | socket.receive.buffer.bytes = 102400
23:16:59 kafka | socket.request.max.bytes = 104857600
23:16:59 kafka | socket.send.buffer.bytes = 102400
23:16:59 kafka | ssl.cipher.suites = []
23:16:59 kafka | ssl.client.auth = none
23:16:59 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:59 kafka | ssl.endpoint.identification.algorithm = https
23:16:59 kafka | ssl.engine.factory.class = null
23:16:59 kafka | ssl.key.password = null
23:16:59 kafka | ssl.keymanager.algorithm = SunX509
23:16:59 kafka | ssl.keystore.certificate.chain = null
23:16:59 kafka | ssl.keystore.key = null
23:16:59 kafka | ssl.keystore.location = null
23:16:59 kafka | ssl.keystore.password = null
23:16:59 kafka | ssl.keystore.type = JKS
23:16:59 kafka | ssl.principal.mapping.rules = DEFAULT
23:16:59 kafka | ssl.protocol = TLSv1.3
23:16:59 kafka | ssl.provider = null
23:16:59 kafka | ssl.secure.random.implementation = null
23:16:59 kafka | ssl.trustmanager.algorithm = PKIX
23:16:59 kafka | ssl.truststore.certificates = null
23:16:59 kafka | ssl.truststore.location = null
23:16:59 kafka | ssl.truststore.password = null
23:16:59 kafka | ssl.truststore.type = JKS
23:16:59 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
23:16:59 kafka | transaction.max.timeout.ms = 900000
23:16:59 kafka | transaction.partition.verification.enable = true
23:16:59 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
23:16:59 kafka | transaction.state.log.load.buffer.size = 5242880
23:16:59 kafka | transaction.state.log.min.isr = 2
23:16:59 kafka | transaction.state.log.num.partitions = 50
23:16:59 kafka | transaction.state.log.replication.factor = 3
23:16:59 kafka | transaction.state.log.segment.bytes = 104857600
23:16:59 kafka | transactional.id.expiration.ms = 604800000
23:16:59 kafka | unclean.leader.election.enable = false
23:16:59 kafka | unstable.api.versions.enable = false
23:16:59 kafka | zookeeper.clientCnxnSocket = null
23:16:59 kafka | zookeeper.connect = zookeeper:2181
23:16:59 kafka | zookeeper.connection.timeout.ms = null
23:16:59 kafka | zookeeper.max.in.flight.requests = 10
23:16:59 kafka | zookeeper.metadata.migration.enable = false
23:16:59 kafka | zookeeper.metadata.migration.min.batch.size = 200
23:16:59 kafka | zookeeper.session.timeout.ms = 18000
23:16:59 kafka | zookeeper.set.acl = false
23:16:59 kafka | zookeeper.ssl.cipher.suites = null
23:16:59 kafka | zookeeper.ssl.client.enable = false
23:16:59 kafka | zookeeper.ssl.crl.enable = false
23:16:59 kafka | zookeeper.ssl.enabled.protocols = null
23:16:59 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
23:16:59 kafka | zookeeper.ssl.keystore.location = null
23:16:59 kafka | zookeeper.ssl.keystore.password = null
23:16:59 kafka | zookeeper.ssl.keystore.type = null
23:16:59 kafka | zookeeper.ssl.ocsp.enable = false
23:16:59 kafka | zookeeper.ssl.protocol = TLSv1.2
23:16:59 kafka | zookeeper.ssl.truststore.location = null
23:16:59 kafka | zookeeper.ssl.truststore.password = null
23:16:59 kafka | zookeeper.ssl.truststore.type = null
23:16:59 kafka |  (kafka.server.KafkaConfig)
23:16:59 kafka | [2024-04-29 23:14:30,618] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:59 kafka | [2024-04-29 23:14:30,625] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:59 kafka | [2024-04-29 23:14:30,625] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:59 kafka | [2024-04-29 23:14:30,627] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:59 kafka | [2024-04-29 23:14:30,658] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
23:16:59 kafka | [2024-04-29 23:14:30,663] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
23:16:59 kafka | [2024-04-29 23:14:30,673] INFO Loaded 0 logs in 15ms (kafka.log.LogManager)
23:16:59 kafka | [2024-04-29 23:14:30,675] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
23:16:59 kafka | [2024-04-29 23:14:30,676] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
23:16:59 kafka | [2024-04-29 23:14:30,688] INFO Starting the log cleaner (kafka.log.LogCleaner)
23:16:59 kafka | [2024-04-29 23:14:30,734] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
23:16:59 kafka | [2024-04-29 23:14:30,749] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
23:16:59 kafka | [2024-04-29 23:14:30,761] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
23:16:59 kafka | [2024-04-29 23:14:30,803] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:59 kafka | [2024-04-29 23:14:31,137] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:59 kafka | [2024-04-29 23:14:31,155] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
23:16:59 kafka | [2024-04-29 23:14:31,155] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:59 kafka | [2024-04-29 23:14:31,161] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
23:16:59 policy-api | Waiting for mariadb port 3306...
23:16:59 policy-api | mariadb (172.17.0.3:3306) open
23:16:59 policy-api | Waiting for policy-db-migrator port 6824...
23:16:59 policy-api | policy-db-migrator (172.17.0.8:6824) open
23:16:59 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
23:16:59 policy-api |
23:16:59 policy-api |   .   ____          _            __ _ _
23:16:59 policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
23:16:59 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
23:16:59 policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
23:16:59 policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
23:16:59 policy-api |  =========|_|==============|___/=/_/_/_/
23:16:59 policy-api |  :: Spring Boot ::                (v3.1.10)
23:16:59 policy-api |
23:16:59 policy-api | [2024-04-29T23:14:41.569+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
23:16:59 policy-api | [2024-04-29T23:14:41.633+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin)
23:16:59 policy-api | [2024-04-29T23:14:41.634+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
23:16:59 policy-api | [2024-04-29T23:14:43.434+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
23:16:59 policy-api | [2024-04-29T23:14:43.536+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 92 ms. Found 6 JPA repository interfaces.
23:16:59 policy-api | [2024-04-29T23:14:43.967+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
23:16:59 policy-api | [2024-04-29T23:14:43.968+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
23:16:59 policy-api | [2024-04-29T23:14:44.579+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
23:16:59 policy-api | [2024-04-29T23:14:44.588+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
23:16:59 policy-api | [2024-04-29T23:14:44.590+00:00|INFO|StandardService|main] Starting service [Tomcat]
23:16:59 policy-api | [2024-04-29T23:14:44.590+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
23:16:59 policy-api | [2024-04-29T23:14:44.674+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
23:16:59 policy-api | [2024-04-29T23:14:44.674+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2976 ms
23:16:59 policy-api | [2024-04-29T23:14:45.096+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
23:16:59 policy-api | [2024-04-29T23:14:45.161+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
23:16:59 policy-api | [2024-04-29T23:14:45.204+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
23:16:59 policy-api | [2024-04-29T23:14:45.469+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
23:16:59 policy-api | [2024-04-29T23:14:45.501+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
23:16:59 policy-api | [2024-04-29T23:14:45.589+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7718a40f
23:16:59 policy-api | [2024-04-29T23:14:45.591+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
23:16:59 policy-api | [2024-04-29T23:14:47.524+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
23:16:59 policy-api | [2024-04-29T23:14:47.527+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:59 policy-api | [2024-04-29T23:14:48.566+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
23:16:59 policy-api | [2024-04-29T23:14:49.351+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
23:16:59 policy-api | [2024-04-29T23:14:50.421+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:16:59 policy-api | [2024-04-29T23:14:50.616+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@347b27f3, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@7f930614, org.springframework.security.web.context.SecurityContextHolderFilter@4812c244, org.springframework.security.web.header.HeaderWriterFilter@6f54a7be, org.springframework.security.web.authentication.logout.LogoutFilter@5ae50044, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5aa2168f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@89537c1, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@8b3ea30, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6ef0a044, org.springframework.security.web.access.ExceptionTranslationFilter@2f3181d9, org.springframework.security.web.access.intercept.AuthorizationFilter@7d6d93f9]
23:16:59 policy-api | [2024-04-29T23:14:51.422+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:16:59 policy-api | [2024-04-29T23:14:51.517+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:16:59 policy-api | [2024-04-29T23:14:51.540+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
23:16:59 policy-api | [2024-04-29T23:14:51.558+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.659 seconds (process running for 11.238)
23:16:59 policy-api | [2024-04-29T23:15:06.980+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:59 policy-api | [2024-04-29T23:15:06.980+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
23:16:59 policy-api | [2024-04-29T23:15:06.981+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
23:16:59 policy-api | [2024-04-29T23:15:07.283+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
23:16:59 policy-api | []
23:16:59 policy-apex-pdp | Waiting for mariadb port 3306...
23:16:59 policy-apex-pdp | mariadb (172.17.0.3:3306) open
23:16:59 policy-apex-pdp | Waiting for kafka port 9092...
23:16:59 policy-apex-pdp | kafka (172.17.0.6:9092) open
23:16:59 policy-apex-pdp | Waiting for pap port 6969...
23:16:59 policy-apex-pdp | pap (172.17.0.10:6969) open
23:16:59 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.449+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.606+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:59 policy-apex-pdp | 	allow.auto.create.topics = true
23:16:59 policy-apex-pdp | 	auto.commit.interval.ms = 5000
23:16:59 policy-apex-pdp | 	auto.include.jmx.reporter = true
23:16:59 policy-apex-pdp | 	auto.offset.reset = latest
23:16:59 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
23:16:59 policy-apex-pdp | 	check.crcs = true
23:16:59 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
23:16:59 policy-apex-pdp | 	client.id = consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-1
23:16:59 policy-apex-pdp | 	client.rack =
23:16:59 policy-apex-pdp | 	connections.max.idle.ms = 540000
23:16:59 policy-apex-pdp | 	default.api.timeout.ms = 60000
23:16:59 policy-apex-pdp | 	enable.auto.commit = true
23:16:59 policy-apex-pdp | 	exclude.internal.topics = true
23:16:59 policy-apex-pdp | 	fetch.max.bytes = 52428800
23:16:59 policy-apex-pdp | 	fetch.max.wait.ms = 500
23:16:59 policy-apex-pdp | 	fetch.min.bytes = 1
23:16:59 policy-apex-pdp | 	group.id = 085fa03c-d2d9-404c-b0e2-72bc2e06aca2
23:16:59 policy-apex-pdp | 	group.instance.id = null
23:16:59 policy-apex-pdp | 	heartbeat.interval.ms = 3000
23:16:59 policy-apex-pdp | 	interceptor.classes = []
23:16:59 policy-apex-pdp | 	internal.leave.group.on.close = true
23:16:59 policy-apex-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
23:16:59 policy-apex-pdp | 	isolation.level = read_uncommitted
23:16:59 policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:59 policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
23:16:59 policy-apex-pdp | 	max.poll.interval.ms = 300000
23:16:59 policy-apex-pdp | 	max.poll.records = 500
23:16:59 policy-apex-pdp | 	metadata.max.age.ms = 300000
23:16:59 policy-apex-pdp | 	metric.reporters = []
23:16:59 policy-apex-pdp | 	metrics.num.samples = 2
23:16:59 policy-apex-pdp | 	metrics.recording.level = INFO
23:16:59 policy-apex-pdp | 	metrics.sample.window.ms = 30000
23:16:59 policy-apex-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:59 policy-apex-pdp | 	receive.buffer.bytes = 65536
23:16:59 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
23:16:59 policy-apex-pdp | 	reconnect.backoff.ms = 50
23:16:59 policy-apex-pdp | 	request.timeout.ms = 30000
23:16:59 policy-apex-pdp | 	retry.backoff.ms = 100
23:16:59 policy-apex-pdp | 	sasl.client.callback.handler.class = null
23:16:59 policy-apex-pdp | 	sasl.jaas.config = null
23:16:59 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:59 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:59 policy-apex-pdp | 	sasl.kerberos.service.name = null
23:16:59 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:59 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor
= 0.8 23:16:59 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:59 policy-apex-pdp | sasl.login.class = null 23:16:59 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:59 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:59 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.652267068Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.653945116Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.678768ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.657967539Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.658624106Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=656.667µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.663907533Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.665776013Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.86838ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.66922424Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.6701538Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=929.03µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.674120363Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.674943322Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=827.459µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.681017367Z 
level=info msg="Executing migration" id="Update dashboard table charset" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.681041928Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=25.681µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.68407636Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.684126501Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=48.911µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.686562227Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.688464888Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.902371ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.691540601Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.693485063Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.945201ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.699249304Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.700663599Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.414195ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.703679592Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.705710184Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.029903ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.709157731Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:27.709423183Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=266.042µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.714465908Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.715341998Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=870.85µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.718439961Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.71924201Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=801.929µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.722240742Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.722312313Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=71.621µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.728155586Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.729212008Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.055542ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.732308081Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.732964398Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=656.897µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.736505136Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:27.741358848Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.849252ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.746639935Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.747251412Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=615.347µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.750055472Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.75077667Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=720.978µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.753495929Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.754206436Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=710.427µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.757014817Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.757323671Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=308.814µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.762025351Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.762575106Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=549.965µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.765563969Z level=info 
msg="Executing migration" id="Add check_sum column" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.76756315Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.997931ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.770583123Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.771299451Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=715.758µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.777097084Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.777260256Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=162.542µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.780296088Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.78045273Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=154.702µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.783185399Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.783896667Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=710.468µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.789554818Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.791522579Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.967491ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.7953101Z level=info msg="Executing migration" id="create data_source table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.796317991Z 
level=info msg="Migration successfully executed" id="create data_source table" duration=1.007371ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.799285503Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.800040381Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=755.078µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.805756273Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.806573191Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=816.538µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.809612014Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.810429393Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=817.189µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.813319074Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.813956571Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=636.787µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.819023726Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.823773047Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.74919ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.828197715Z level=info msg="Executing migration" id="create data_source table v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.830192877Z 
level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.988501ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.83427344Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.835370641Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.123031ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.840546357Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.841380007Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=833.37µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.844826324Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.84543995Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=611.306µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.851027311Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.855059414Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.032052ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.86023294Z level=info msg="Executing migration" id="Add secure json data column" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.862637766Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.404966ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.865891061Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.865938301Z level=info msg="Migration successfully executed" id="Update data_source table 
charset" duration=51.36µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.87049517Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.870715073Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=218.673µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.876234782Z level=info msg="Executing migration" id="Add read_only data column" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.880261125Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.025973ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.883995886Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.8843927Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=397.744µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.88893211Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.889129202Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=195.182µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.892106654Z level=info msg="Executing migration" id="Add uid column" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.895675312Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.567998ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.902454114Z level=info msg="Executing migration" id="Update uid value" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.902711367Z level=info msg="Migration successfully executed" id="Update uid value" duration=257.033µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.905849772Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:27.90659822Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=748.198µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.909568292Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.910238599Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=669.497µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.915384644Z level=info msg="Executing migration" id="create api_key table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.916183153Z level=info msg="Migration successfully executed" id="create api_key table" duration=798.599µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.919761362Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.92054295Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=781.368µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.92326601Z level=info msg="Executing migration" id="add index api_key.key" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.923941607Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=675.407µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.928120752Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.92885369Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=732.688µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.931984954Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.932712831Z level=info msg="Migration successfully executed" id="drop index 
IDX_api_key_account_id - v1" duration=726.028µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.937138219Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.937693415Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=555.806µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.940744218Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.941264053Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=519.995µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.944309726Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.951590305Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.278289ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.955710349Z level=info msg="Executing migration" id="create api_key table v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.956493888Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=783.139µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.958892214Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.959684232Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=791.688µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.962664854Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.963459233Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=794.118µs 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:27.967365425Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.968217984Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=852.199µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.971922954Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.97249059Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=567.845µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.975842296Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.976753856Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=908.28µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.980772179Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.980800489Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=28.26µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.983875152Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.986365909Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.490157ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.989636335Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.992197102Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.560697ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.996126895Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.996314847Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=187.562µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:27.999763844Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.00223411Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.469766ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.005338724Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.007839457Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.500153ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.012051635Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.013446623Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.395297ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.016953059Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.017870082Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=917.453µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.021232736Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.022127468Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=894.842µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.025951808Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:28.02686954Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=916.072µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.029712378Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.030615159Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=901.212µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.033585608Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.0344605Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=874.442µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.03833313Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.038396481Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=62.321µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.041452621Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.041483742Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=33.691µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.04444173Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:59 kafka | [2024-04-29 23:14:31,169] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:59 kafka | [2024-04-29 23:14:31,187] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 
23:16:59 kafka | [2024-04-29 23:14:31,188] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:59 kafka | [2024-04-29 23:14:31,190] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:59 kafka | [2024-04-29 23:14:31,191] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:59 kafka | [2024-04-29 23:14:31,194] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:59 kafka | [2024-04-29 23:14:31,209] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:59 kafka | [2024-04-29 23:14:31,210] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:59 kafka | [2024-04-29 23:14:31,232] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 23:16:59 kafka | [2024-04-29 23:14:31,251] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714432471243,1714432471243,1,0,0,72057610636623873,258,0,27 23:16:59 kafka | (kafka.zk.KafkaZkClient) 23:16:59 kafka | [2024-04-29 23:14:31,252] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:59 kafka | [2024-04-29 23:14:31,298] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:59 kafka | [2024-04-29 23:14:31,305] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:59 kafka | [2024-04-29 23:14:31,311] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:59 kafka | [2024-04-29 23:14:31,323] INFO [ExpirationReaper-1-Rebalance]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:59 kafka | [2024-04-29 23:14:31,326] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
23:16:59 kafka | [2024-04-29 23:14:31,338] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,342] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,344] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
23:16:59 kafka | [2024-04-29 23:14:31,346] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
23:16:59 kafka | [2024-04-29 23:14:31,356] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
23:16:59 kafka | [2024-04-29 23:14:31,382] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:59 kafka | [2024-04-29 23:14:31,385] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
23:16:59 kafka | [2024-04-29 23:14:31,386] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:59 kafka | [2024-04-29 23:14:31,392] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,392] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
23:16:59 kafka | [2024-04-29 23:14:31,396] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,400] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,402] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,417] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,422] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,430] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
23:16:59 kafka | [2024-04-29 23:14:31,430] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:59 kafka | [2024-04-29 23:14:31,442] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:16:59 kafka | [2024-04-29 23:14:31,443] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,444] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,444] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,444] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,447] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,447] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,448] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,448] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
23:16:59 kafka | [2024-04-29 23:14:31,449] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,452] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
23:16:59 kafka | [2024-04-29 23:14:31,460] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
23:16:59 kafka | [2024-04-29 23:14:31,460] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
23:16:59 kafka | [2024-04-29 23:14:31,462] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:59 kafka | [2024-04-29 23:14:31,467] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:59 kafka | [2024-04-29 23:14:31,469] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
23:16:59 kafka | [2024-04-29 23:14:31,469] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
23:16:59 kafka | [2024-04-29 23:14:31,470] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
23:16:59 kafka | [2024-04-29 23:14:31,476] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
23:16:59 kafka | [2024-04-29 23:14:31,477] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
23:16:59 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:59 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:59 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:59 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:59 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:59 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:59 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:59 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:59 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:59 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:59 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:59 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:59 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:59 policy-apex-pdp | security.providers = null
23:16:59 policy-apex-pdp | send.buffer.bytes = 131072
23:16:59 policy-apex-pdp | session.timeout.ms = 45000
23:16:59 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:59 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:59 policy-apex-pdp | ssl.cipher.suites = null
23:16:59 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:59 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:59 policy-apex-pdp | ssl.engine.factory.class = null
23:16:59 policy-apex-pdp | ssl.key.password = null
23:16:59 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:59 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:59 policy-apex-pdp | ssl.keystore.key = null
23:16:59 policy-apex-pdp | ssl.keystore.location = null
23:16:59 policy-apex-pdp | ssl.keystore.password = null
23:16:59 policy-apex-pdp | ssl.keystore.type = JKS
23:16:59 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:59 policy-apex-pdp | ssl.provider = null
23:16:59 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:59 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:59 policy-apex-pdp | ssl.truststore.certificates = null
23:16:59 policy-db-migrator | Waiting for mariadb port 3306...
23:16:59 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
23:16:59 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
23:16:59 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
23:16:59 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
23:16:59 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
23:16:59 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
23:16:59 policy-db-migrator | 321 blocks
23:16:59 policy-db-migrator | Preparing upgrade release version: 0800
23:16:59 policy-db-migrator | Preparing upgrade release version: 0900
23:16:59 policy-db-migrator | Preparing upgrade release version: 1000
23:16:59 policy-db-migrator | Preparing upgrade release version: 1100
23:16:59 policy-db-migrator | Preparing upgrade release version: 1200
23:16:59 policy-db-migrator | Preparing upgrade release version: 1300
23:16:59 policy-db-migrator | Done
23:16:59 policy-db-migrator | name version
23:16:59 policy-db-migrator | policyadmin 0
23:16:59 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
23:16:59 policy-db-migrator | upgrade: 0 -> 1300
23:16:59 policy-db-migrator |
23:16:59 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
23:16:59 policy-db-migrator | --------------
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-db-migrator |
23:16:59 policy-db-migrator |
23:16:59 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:59 policy-db-migrator | --------------
23:16:59 policy-apex-pdp | ssl.truststore.location = null
23:16:59 policy-apex-pdp | ssl.truststore.password = null
23:16:59 policy-apex-pdp | ssl.truststore.type = JKS
23:16:59 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:59 policy-apex-pdp |
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.795+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.795+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.796+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432503794
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.798+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-1, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Subscribed to topic(s): policy-pdp-pap
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.809+00:00|INFO|ServiceManager|main] service manager starting
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.810+00:00|INFO|ServiceManager|main] service manager starting topics
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.811+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=085fa03c-d2d9-404c-b0e2-72bc2e06aca2, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.831+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:59 policy-apex-pdp | allow.auto.create.topics = true
23:16:59 policy-apex-pdp | auto.commit.interval.ms = 5000
23:16:59 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:59 policy-apex-pdp | auto.offset.reset = latest
23:16:59 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:59 policy-apex-pdp | check.crcs = true
23:16:59 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:59 policy-apex-pdp | client.id = consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2
23:16:59 policy-apex-pdp | client.rack =
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.047126306Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.684746ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.050122145Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.052862161Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.740066ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.056904414Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.056994336Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=89.141µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.060193257Z level=info msg="Executing migration" id="create quota table v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.060990707Z level=info msg="Migration successfully executed" id="create quota table v1" duration=797.43µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.064391662Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.065622529Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.231287ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.069832433Z level=info msg="Executing migration" id="Update quota table charset"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.069869984Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=38.551µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.073107456Z level=info msg="Executing migration" id="create plugin_setting table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.073859657Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=752.141µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.07718005Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.07799833Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=817.98µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.081298094Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.085938145Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.639562ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.090029358Z level=info msg="Executing migration" id="Update plugin_setting table charset"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.090053808Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.84µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.093329172Z level=info msg="Executing migration" id="create session table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.094147702Z level=info msg="Migration successfully executed" id="create session table" duration=817.56µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.097269623Z level=info msg="Executing migration" id="Drop old table playlist table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.097349974Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=80.871µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.10161202Z level=info msg="Executing migration" id="Drop old table playlist_item table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.101751732Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=140.122µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.104780442Z level=info msg="Executing migration" id="create playlist table v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.105480181Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=699.489µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.108852075Z level=info msg="Executing migration" id="create playlist item table v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.110040082Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.189377ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.114216346Z level=info msg="Executing migration" id="Update playlist table charset"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.114284587Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=69.591µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.11758746Z level=info msg="Executing migration" id="Update playlist_item table charset"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.117609261Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=24.961µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.120985755Z level=info msg="Executing migration" id="Add playlist column created_at"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.123873843Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.887558ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.127060806Z level=info msg="Executing migration" id="Add playlist column updated_at"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.129962544Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.901668ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.134554584Z level=info msg="Executing migration" id="drop preferences table v2"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.134635055Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=80.991µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.137612894Z level=info msg="Executing migration" id="drop preferences table v3"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.137687465Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=75.661µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.140635604Z level=info msg="Executing migration" id="create preferences table v3"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.141424515Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=788.601µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.145045942Z level=info msg="Executing migration" id="Update preferences table charset"
23:16:59 kafka | [2024-04-29 23:14:31,478] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,482] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
23:16:59 kafka | [2024-04-29 23:14:31,484] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,485] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,485] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,485] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,487] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,491] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
23:16:59 kafka | [2024-04-29 23:14:31,494] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
23:16:59 kafka | [2024-04-29 23:14:31,497] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:31,498] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
23:16:59 kafka | [2024-04-29 23:14:31,499] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
23:16:59 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
23:16:59 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
23:16:59 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
23:16:59 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
23:16:59 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
23:16:59 kafka | [2024-04-29 23:14:31,502] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
23:16:59 kafka | [2024-04-29 23:14:31,519] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
23:16:59 kafka | [2024-04-29 23:14:31,519] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
23:16:59 kafka | [2024-04-29 23:14:31,519] INFO Kafka startTimeMs: 1714432471512 (org.apache.kafka.common.utils.AppInfoParser)
23:16:59 kafka | [2024-04-29 23:14:31,521] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
23:16:59 kafka | [2024-04-29 23:14:31,605] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
23:16:59 kafka | [2024-04-29 23:14:31,710] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:59 kafka | [2024-04-29 23:14:31,714] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:59 kafka | [2024-04-29 23:14:31,715] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:59 kafka | [2024-04-29 23:14:36,498] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:14:36,499] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:15:02,796] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
23:16:59 kafka | [2024-04-29 23:15:02,806] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:59 kafka | [2024-04-29 23:15:02,807] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:59 kafka | [2024-04-29 23:15:02,819] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.145071283Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.181µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.147830149Z level=info msg="Executing migration" id="Add column team_id in preferences"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.150985151Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.154802ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.15396471Z level=info msg="Executing migration" id="Update team_id column values in preferences"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.154119052Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=153.242µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.157163222Z level=info msg="Executing migration" id="Add column week_start in preferences"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.160304313Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.140761ms
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.165993518Z level=info msg="Executing migration" id="Add column preferences.json_data"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.169138541Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.144743ms
23:16:59 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:59 policy-apex-pdp | default.api.timeout.ms = 60000
23:16:59 policy-apex-pdp | enable.auto.commit = true
23:16:59 policy-apex-pdp | exclude.internal.topics = true
23:16:59 policy-apex-pdp | fetch.max.bytes = 52428800
23:16:59 policy-apex-pdp | fetch.max.wait.ms = 500
23:16:59 policy-apex-pdp | fetch.min.bytes = 1
23:16:59 policy-apex-pdp | group.id = 085fa03c-d2d9-404c-b0e2-72bc2e06aca2
23:16:59 policy-apex-pdp | group.instance.id = null
23:16:59 policy-apex-pdp | heartbeat.interval.ms = 3000
23:16:59 policy-apex-pdp | interceptor.classes = []
23:16:59 policy-apex-pdp | internal.leave.group.on.close = true
23:16:59 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:59 policy-apex-pdp | isolation.level = read_uncommitted
23:16:59 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:59 policy-apex-pdp | max.partition.fetch.bytes = 1048576
23:16:59 policy-apex-pdp | max.poll.interval.ms = 300000
23:16:59 policy-apex-pdp | max.poll.records = 500
23:16:59 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:59 policy-apex-pdp | metric.reporters = []
23:16:59 policy-apex-pdp | metrics.num.samples = 2
23:16:59 policy-apex-pdp | metrics.recording.level = INFO
23:16:59 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:59 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:59 policy-apex-pdp | receive.buffer.bytes = 65536
23:16:59 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:59 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:59 policy-apex-pdp | request.timeout.ms = 30000
23:16:59 policy-apex-pdp | retry.backoff.ms = 100
23:16:59 policy-apex-pdp | sasl.client.callback.handler.class = null
23:16:59 policy-apex-pdp | sasl.jaas.config = null
23:16:59 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:59 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:16:59 policy-apex-pdp | sasl.kerberos.service.name = null
23:16:59 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:59 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:59 policy-apex-pdp | sasl.login.callback.handler.class = null
23:16:59 policy-apex-pdp | sasl.login.class = null
23:16:59 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:16:59 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:16:59 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:16:59 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:59 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:59 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:59 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:59 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:59 simulator | overriding logback.xml
23:16:59 simulator | 2024-04-29 23:14:32,739 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:59 simulator | 2024-04-29 23:14:32,806 INFO org.onap.policy.models.simulators starting
23:16:59 simulator | 2024-04-29 23:14:32,806 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
23:16:59 simulator | 2024-04-29 23:14:33,001 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
23:16:59 simulator | 2024-04-29 23:14:33,003 INFO org.onap.policy.models.simulators starting A&AI simulator
23:16:59 simulator | 2024-04-29 23:14:33,105 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:59 simulator | 2024-04-29 23:14:33,115 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:59 simulator | 2024-04-29 23:14:33,117 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:59 simulator | 2024-04-29 23:14:33,124 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
23:16:59 simulator | 2024-04-29 23:14:33,180 INFO Session workerName=node0
23:16:59 simulator | 2024-04-29 23:14:33,639 INFO Using GSON for REST calls
23:16:59 simulator | 2024-04-29 23:14:33,720 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
23:16:59 simulator | 2024-04-29 23:14:33,727 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
23:16:59 simulator | 2024-04-29 23:14:33,733 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1433ms
23:16:59 simulator | 2024-04-29 23:14:33,733 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4384 ms.
23:16:59 simulator | 2024-04-29 23:14:33,741 INFO org.onap.policy.models.simulators starting SDNC simulator
23:16:59 simulator | 2024-04-29 23:14:33,744 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:59 simulator | 2024-04-29 23:14:33,744 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:59 simulator | 2024-04-29 23:14:33,746 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:59 simulator | 2024-04-29 23:14:33,747 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
23:16:59 simulator | 2024-04-29 23:14:33,751 INFO Session workerName=node0
23:16:59 simulator | 2024-04-29 23:14:33,810 INFO Using GSON for REST calls
23:16:59 simulator | 2024-04-29 23:14:33,821 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
23:16:59 simulator | 2024-04-29 23:14:33,823 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:59 simulator | 2024-04-29 23:14:33,823 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1523ms
23:16:59 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:59 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:59 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:59 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:59 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:59 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:59 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:59 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:59 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:59 policy-apex-pdp | security.providers = null
23:16:59 policy-apex-pdp | send.buffer.bytes = 131072
23:16:59 policy-apex-pdp | session.timeout.ms = 45000
23:16:59 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:59 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:59 policy-apex-pdp | ssl.cipher.suites = null
23:16:59 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:59 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:59 policy-apex-pdp | ssl.engine.factory.class = null
23:16:59 policy-apex-pdp | ssl.key.password = null
23:16:59 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:59 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:59 policy-apex-pdp | ssl.keystore.key = null
23:16:59 policy-apex-pdp | ssl.keystore.location = null
23:16:59 policy-apex-pdp | ssl.keystore.password = null
23:16:59 policy-apex-pdp | ssl.keystore.type = JKS
23:16:59 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:59 policy-apex-pdp | ssl.provider = null
23:16:59 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:59 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:59 policy-apex-pdp | ssl.truststore.certificates = null
23:16:59 policy-apex-pdp | ssl.truststore.location = null
23:16:59 policy-apex-pdp | ssl.truststore.password = null
23:16:59 policy-apex-pdp |
ssl.truststore.type = JKS 23:16:59 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:59 policy-apex-pdp | 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.839+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.839+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.839+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432503839 23:16:59 prometheus | ts=2024-04-29T23:14:26.265Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:59 prometheus | ts=2024-04-29T23:14:26.265Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" 23:16:59 prometheus | ts=2024-04-29T23:14:26.265Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" 23:16:59 prometheus | ts=2024-04-29T23:14:26.265Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:59 prometheus | ts=2024-04-29T23:14:26.265Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:59 prometheus | ts=2024-04-29T23:14:26.265Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:59 prometheus | ts=2024-04-29T23:14:26.269Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:59 prometheus | ts=2024-04-29T23:14:26.270Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
23:16:59 prometheus | ts=2024-04-29T23:14:26.271Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:59 prometheus | ts=2024-04-29T23:14:26.271Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:16:59 prometheus | ts=2024-04-29T23:14:26.273Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:59 prometheus | ts=2024-04-29T23:14:26.273Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.73µs 23:16:59 prometheus | ts=2024-04-29T23:14:26.273Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:59 prometheus | ts=2024-04-29T23:14:26.273Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:59 prometheus | ts=2024-04-29T23:14:26.273Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=17.57µs wal_replay_duration=276.253µs wbl_replay_duration=190ns total_replay_duration=311.743µs 23:16:59 prometheus | ts=2024-04-29T23:14:26.276Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 23:16:59 prometheus | ts=2024-04-29T23:14:26.276Z caller=main.go:1153 level=info msg="TSDB started" 23:16:59 prometheus | ts=2024-04-29T23:14:26.276Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:59 prometheus | ts=2024-04-29T23:14:26.281Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=5.207266ms db_storage=2.84µs remote_storage=2.25µs web_handler=980ns query_engine=1.95µs scrape=433.365µs scrape_sd=187.562µs notify=38.06µs notify_sd=12.79µs rules=2.67µs tracing=6.21µs 23:16:59 prometheus | ts=2024-04-29T23:14:26.281Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
23:16:59 prometheus | ts=2024-04-29T23:14:26.281Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.839+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Subscribed to topic(s): policy-pdp-pap 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.840+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=092d5fbd-fc04-428f-ab39-10ea6ce35767, alive=false, publisher=null]]: starting 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.850+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:59 policy-apex-pdp | acks = -1 23:16:59 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:59 policy-apex-pdp | batch.size = 16384 23:16:59 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:59 policy-apex-pdp | buffer.memory = 33554432 23:16:59 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:59 policy-apex-pdp | client.id = producer-1 23:16:59 policy-apex-pdp | compression.type = none 23:16:59 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:59 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:59 policy-apex-pdp | enable.idempotence = true 23:16:59 policy-apex-pdp | interceptor.classes = [] 23:16:59 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:59 policy-apex-pdp | linger.ms = 0 23:16:59 policy-apex-pdp | max.block.ms = 60000 23:16:59 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:59 policy-apex-pdp | max.request.size = 1048576 23:16:59 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:59 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:59 policy-apex-pdp | metric.reporters = [] 23:16:59 policy-apex-pdp | metrics.num.samples = 2 23:16:59 policy-apex-pdp | metrics.recording.level = INFO 23:16:59 
policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:59 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:59 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:59 policy-apex-pdp | partitioner.class = null 23:16:59 policy-apex-pdp | partitioner.ignore.keys = false 23:16:59 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:59 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:59 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:59 policy-apex-pdp | request.timeout.ms = 30000 23:16:59 policy-apex-pdp | retries = 2147483647 23:16:59 policy-apex-pdp | retry.backoff.ms = 100 23:16:59 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:59 policy-apex-pdp | sasl.jaas.config = null 23:16:59 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:59 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:59 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:59 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:59 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:59 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:59 policy-apex-pdp | sasl.login.class = null 23:16:59 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:59 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:59 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:59 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:59 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:59 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:59 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:59 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.172047649Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.17212589Z level=info msg="Migration successfully 
executed" id="alter preferences.json_data to mediumtext v1" duration=78.491µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.175081309Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.17596845Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=887.061µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.183244726Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.184041646Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=796.94µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.187762965Z level=info msg="Executing migration" id="create alert table v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.18963646Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.873945ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.193035475Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.193909586Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=874.111µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.19791613Z level=info msg="Executing migration" id="add index alert state" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.19871895Z level=info msg="Migration successfully executed" id="add index alert state" duration=802.81µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.202809334Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.203752396Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=942.162µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.207430675Z level=info msg="Executing 
migration" id="Create alert_rule_tag table v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.208494159Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.063295ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.212603323Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.213443704Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=840.301µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.216660067Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.217433417Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=772.43µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.220455226Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.229659487Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.204001ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.233241295Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.233736612Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=495.438µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.236283735Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.236887663Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index 
alert_rule_tag.alert_id_tag_id V2" duration=603.698µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.239641729Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.239910763Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=269.524µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.243585311Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.244109918Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=523.047µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.2465488Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.247753936Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.204696ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.251111201Z level=info msg="Executing migration" id="Add column is_default" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.256728895Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.618584ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.260606045Z level=info msg="Executing migration" id="Add column frequency" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.264114932Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.509067ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.26694787Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.270408165Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.460045ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.273197912Z level=info msg="Executing migration" id="Add 
column disable_resolve_message" 23:16:59 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:59 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:59 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:59 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:59 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:59 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:59 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:59 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:59 simulator | 2024-04-29 23:14:33,823 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 
23:16:59 simulator | 2024-04-29 23:14:33,824 INFO org.onap.policy.models.simulators starting SO simulator 23:16:59 simulator | 2024-04-29 23:14:33,826 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:59 simulator | 2024-04-29 23:14:33,826 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:59 simulator | 2024-04-29 23:14:33,826 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:59 simulator | 2024-04-29 23:14:33,827 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 23:16:59 simulator | 2024-04-29 23:14:33,838 INFO Session workerName=node0 23:16:59 simulator | 2024-04-29 23:14:33,889 INFO Using GSON for REST calls 23:16:59 simulator | 2024-04-29 23:14:33,905 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 23:16:59 simulator | 2024-04-29 23:14:33,906 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:59 simulator | 2024-04-29 23:14:33,907 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1607ms 23:16:59 simulator | 2024-04-29 23:14:33,907 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO 
simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4919 ms. 23:16:59 simulator | 2024-04-29 23:14:33,908 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:59 simulator | 2024-04-29 23:14:33,911 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:59 simulator | 2024-04-29 23:14:33,911 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:59 simulator | 2024-04-29 23:14:33,914 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:59 simulator | 2024-04-29 23:14:33,915 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 23:16:59 simulator | 2024-04-29 23:14:33,921 INFO Session workerName=node0 23:16:59 simulator | 2024-04-29 23:14:33,986 INFO Using GSON for REST calls 23:16:59 simulator | 2024-04-29 23:14:33,995 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 23:16:59 simulator | 2024-04-29 23:14:33,997 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:59 simulator | 2024-04-29 23:14:33,997 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1697ms 23:16:59 kafka | [2024-04-29 23:15:02,864] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(IWsOBm1GS4OdGGC-w1lwlg),Map(policy-pdp-pap-0 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(_u5Y4Qn_TSSHRzz95FvL9Q),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:59 kafka | [2024-04-29 23:15:02,865] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40
 (kafka.controller.KafkaController) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,867] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 23:16:59 simulator | 2024-04-29 23:14:33,997 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4917 ms. 23:16:59 simulator | 2024-04-29 23:14:33,998 INFO org.onap.policy.models.simulators started 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.276809079Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.610337ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.283370215Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.284918346Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.548051ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.288660205Z level=info msg="Executing migration" id="Update alert table charset" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.288687115Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=28.33µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.291875597Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:28.291910288Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=39.051µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.296853113Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.297899568Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.047375ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.305004441Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.306582782Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.578741ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.310298121Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.311566497Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.269556ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT 
NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:59 
policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:59 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:59 policy-apex-pdp | security.providers = null 23:16:59 policy-apex-pdp | send.buffer.bytes = 131072 23:16:59 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:59 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:59 policy-apex-pdp | ssl.cipher.suites = null 23:16:59 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:59 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:59 policy-apex-pdp | ssl.engine.factory.class = null 23:16:59 policy-apex-pdp | ssl.key.password = null 23:16:59 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:59 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:59 policy-apex-pdp | ssl.keystore.key = null 23:16:59 policy-apex-pdp | ssl.keystore.location = null 23:16:59 policy-apex-pdp | ssl.keystore.password = null 23:16:59 policy-apex-pdp | ssl.keystore.type = JKS 23:16:59 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:59 policy-apex-pdp | ssl.provider = null 23:16:59 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:59 policy-apex-pdp | 
ssl.trustmanager.algorithm = PKIX 23:16:59 policy-apex-pdp | ssl.truststore.certificates = null 23:16:59 policy-apex-pdp | ssl.truststore.location = null 23:16:59 policy-apex-pdp | ssl.truststore.password = null 23:16:59 policy-apex-pdp | ssl.truststore.type = JKS 23:16:59 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:59 policy-apex-pdp | transactional.id = null 23:16:59 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:59 policy-apex-pdp | 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.859+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.874+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.874+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.874+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432503874 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.875+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=092d5fbd-fc04-428f-ab39-10ea6ce35767, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.875+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.875+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.877+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.877+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.878+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:59 policy-apex-pdp | 
[2024-04-29T23:15:03.878+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.879+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.879+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=085fa03c-d2d9-404c-b0e2-72bc2e06aca2, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.879+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=085fa03c-d2d9-404c-b0e2-72bc2e06aca2, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.879+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:59 policy-apex-pdp | 
[2024-04-29T23:15:03.895+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:59 policy-apex-pdp | [] 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,868] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,869] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 
kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | 23:16:59 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:59 zookeeper | ===> User 23:16:59 policy-apex-pdp | [2024-04-29T23:15:03.898+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:59 kafka | [2024-04-29 23:15:02,876] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | Waiting for mariadb port 3306... 23:16:59 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"fb45a407-2dbe-4881-b530-9938300209cf","timestampMs":1714432503878,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"} 23:16:59 kafka | [2024-04-29 23:15:02,876] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | mariadb (172.17.0.3:3306) open 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.317758199Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:59 zookeeper | ===> Configuring ... 23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.054+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:59 kafka | [2024-04-29 23:15:02,878] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | Waiting for kafka port 9092... 
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.318654561Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=896.882µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.321739721Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.054+00:00|INFO|ServiceManager|main] service manager starting 23:16:59 kafka | [2024-04-29 23:15:02,878] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:59 policy-pap | kafka (172.17.0.6:9092) open 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.322687433Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=950.392µs 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.325700753Z level=info msg="Executing migration" id="Add for to alert table" 23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.054+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:59 kafka | [2024-04-29 23:15:02,878] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | Waiting for api port 6969... 23:16:59 zookeeper | ===> Running preflight checks ... 
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.32997392Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.272577ms
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.055+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:59 kafka | [2024-04-29 23:15:02,878] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.336930921Z level=info msg="Executing migration" id="Add column uid in alert_notification"
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.073+00:00|INFO|ServiceManager|main] service manager started
23:16:59 kafka | [2024-04-29 23:15:02,878] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-pap | api (172.17.0.9:6969) open
23:16:59 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.340728972Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.796221ms
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.073+00:00|INFO|ServiceManager|main] service manager started
23:16:59 kafka | [2024-04-29 23:15:02,878] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
23:16:59 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.343524869Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.074+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
23:16:59 kafka | [2024-04-29 23:15:02,878] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
23:16:59 zookeeper | ===> Launching
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.343782362Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=257.073µs
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.073+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:59 policy-pap | 
23:16:59 zookeeper | ===> Launching zookeeper ...
23:16:59 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.346072902Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.187+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Cluster ID: 1q8HESR3R-yEc2qak37gtw
23:16:59 policy-pap | . ____ _ __ _ _
23:16:59 zookeeper | [2024-04-29 23:14:27,631] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.347030895Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=957.803µs
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.187+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 1q8HESR3R-yEc2qak37gtw
23:16:59 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
23:16:59 zookeeper | [2024-04-29 23:14:27,638] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.349623219Z level=info msg="Executing migration" id="Remove unique index org_id_name"
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.189+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
23:16:59 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
23:16:59 zookeeper | [2024-04-29 23:14:27,638] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.350544231Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=920.821µs
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:59 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
23:16:59 zookeeper | [2024-04-29 23:14:27,638] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.355237823Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.202+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] (Re-)joining group
23:16:59 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
23:16:59 zookeeper | [2024-04-29 23:14:27,638] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.35953593Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.296617ms
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.215+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Request joining group due to: need to re-join with the given member-id: consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2-0386bef5-4364-47bf-87f4-bb00f58168cb
23:16:59 policy-pap | =========|_|==============|___/=/_/_/_/
23:16:59 zookeeper | [2024-04-29 23:14:27,639] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:59 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.371659999Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.215+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:59 policy-pap | :: Spring Boot :: (v3.1.10)
23:16:59 zookeeper | [2024-04-29 23:14:27,639] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.372061085Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=404.056µs
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.37700643Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.215+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] (Re-)joining group
23:16:59 policy-pap | 
23:16:59 zookeeper | [2024-04-29 23:14:27,639] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.378483679Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.477379ms
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.625+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
23:16:59 policy-pap | [2024-04-29T23:14:53.822+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
23:16:59 zookeeper | [2024-04-29 23:14:27,639] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.384148444Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:04.625+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
23:16:59 policy-pap | [2024-04-29T23:14:53.874+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
23:16:59 zookeeper | [2024-04-29 23:14:27,641] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.384925965Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=777.051µs
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:07.219+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Successfully joined group with generation Generation{generationId=1, memberId='consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2-0386bef5-4364-47bf-87f4-bb00f58168cb', protocol='range'}
23:16:59 policy-pap | [2024-04-29T23:14:53.876+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
23:16:59 zookeeper | [2024-04-29 23:14:27,641] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.387883364Z level=info msg="Executing migration" id="Drop old annotation table v4"
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:07.229+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Finished assignment for group at generation 1: {consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2-0386bef5-4364-47bf-87f4-bb00f58168cb=Assignment(partitions=[policy-pdp-pap-0])}
23:16:59 policy-pap | [2024-04-29T23:14:55.683+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
23:16:59 zookeeper | [2024-04-29 23:14:27,641] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.388033436Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=149.912µs
23:16:59 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:07.237+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Successfully synced group in generation Generation{generationId=1, memberId='consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2-0386bef5-4364-47bf-87f4-bb00f58168cb', protocol='range'}
23:16:59 policy-pap | [2024-04-29T23:14:55.767+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 76 ms. Found 7 JPA repository interfaces.
23:16:59 zookeeper | [2024-04-29 23:14:27,641] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.3906888Z level=info msg="Executing migration" id="create annotation table v5"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:07.237+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:59 policy-pap | [2024-04-29T23:14:56.206+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:59 zookeeper | [2024-04-29 23:14:27,641] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.392442473Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.788123ms
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:07.239+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Adding newly assigned partitions: policy-pdp-pap-0
23:16:59 policy-pap | [2024-04-29T23:14:56.207+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:59 zookeeper | [2024-04-29 23:14:27,641] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.398798037Z level=info msg="Executing migration" id="add index annotation 0 v3"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:07.245+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Found no committed offset for partition policy-pdp-pap-0
23:16:59 policy-pap | [2024-04-29T23:14:56.810+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
23:16:59 zookeeper | [2024-04-29 23:14:27,641] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.400664561Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.866594ms
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:07.255+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2, groupId=085fa03c-d2d9-404c-b0e2-72bc2e06aca2] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:59 policy-pap | [2024-04-29T23:14:56.819+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
23:16:59 zookeeper | [2024-04-29 23:14:27,652] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.404343711Z level=info msg="Executing migration" id="add index annotation 1 v3"
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:23.879+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:16:59 policy-pap | [2024-04-29T23:14:56.821+00:00|INFO|StandardService|main] Starting service [Tomcat]
23:16:59 zookeeper | [2024-04-29 23:14:27,655] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.405329843Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=982.832µs
23:16:59 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"11f23ac5-5fc1-4033-99a8-bb731eb89470","timestampMs":1714432523879,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"}
23:16:59 policy-pap | [2024-04-29T23:14:56.821+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
23:16:59 zookeeper | [2024-04-29 23:14:27,655] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.408506275Z level=info msg="Executing migration" id="add index annotation 2 v3"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:23.899+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:59 policy-pap | [2024-04-29T23:14:56.913+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
23:16:59 zookeeper | [2024-04-29 23:14:27,657] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.409461907Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=956.782µs
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"11f23ac5-5fc1-4033-99a8-bb731eb89470","timestampMs":1714432523879,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"}
23:16:59 policy-pap | [2024-04-29T23:14:56.913+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2971 ms
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.413991538Z level=info msg="Executing migration" id="add index annotation 3 v3"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:23.901+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:59 policy-pap | [2024-04-29T23:14:57.304+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.415790501Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.798064ms
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.045+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:59 policy-pap | [2024-04-29T23:14:57.360+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.419601201Z level=info msg="Executing migration" id="add index annotation 4 v3"
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","timestampMs":1714432523990,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:59 policy-pap | [2024-04-29T23:14:57.751+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.42179442Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.193379ms
23:16:59 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.064+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
23:16:59 policy-pap | [2024-04-29T23:14:57.849+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@14982a82
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.43234034Z level=info msg="Executing migration" id="Update annotation table charset"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,879] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.064+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
23:16:59 policy-pap | [2024-04-29T23:14:57.851+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.43238534Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=46.9µs
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:59 kafka | [2024-04-29 23:15:02,880] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f2a8074f-0013-446c-a352-ca4fd5931c01","timestampMs":1714432524064,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"}
23:16:59 policy-pap | [2024-04-29T23:14:57.877+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.438197397Z level=info msg="Executing migration" id="Add column region_id to annotation table"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,880] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.068+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:16:59 policy-pap | [2024-04-29T23:14:59.316+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.444989616Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.789119ms
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,880] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b71c8243-6027-4010-b6e7-510fa7dd1d94","timestampMs":1714432524068,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:59 policy-pap | [2024-04-29T23:14:59.327+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.448939708Z level=info msg="Executing migration" id="Drop category_id index"
23:16:59 policy-db-migrator | 
23:16:59 kafka | [2024-04-29 23:15:02,880] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.079+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:59 policy-pap | [2024-04-29T23:14:59.796+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
23:16:59 zookeeper | [2024-04-29 23:14:27,668] INFO (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.45060943Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.670272ms
23:16:59 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:16:59 kafka | [2024-04-29 23:15:02,880] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f2a8074f-0013-446c-a352-ca4fd5931c01","timestampMs":1714432524064,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"}
23:16:59 policy-pap | [2024-04-29T23:15:00.207+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.45443735Z level=info msg="Executing migration" id="Add column tags to annotation table"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:02,880] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.079+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:59 policy-pap | [2024-04-29T23:15:00.314+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.460317349Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.879379ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.085+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b71c8243-6027-4010-b6e7-510fa7dd1d94","timestampMs":1714432524068,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-pap | [2024-04-29T23:15:00.589+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:59 kafka | [2024-04-29 23:15:02,880] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.465638848Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.085+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.113+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-pap | allow.auto.create.topics = true 23:16:59 kafka | [2024-04-29 
23:15:02,880] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.466297107Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=652.059µs 23:16:59 policy-db-migrator | 23:16:59 policy-apex-pdp | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"7b662de9-702e-4d3b-a521-88c62a87dc66","timestampMs":1714432523991,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.116+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:59 policy-pap | auto.commit.interval.ms = 5000 23:16:59 kafka | [2024-04-29 23:15:03,042] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.470443311Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:59 policy-db-migrator | 23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"7b662de9-702e-4d3b-a521-88c62a87dc66","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"0ccb37a6-3abc-4fbc-a096-737e342c9174","timestampMs":1714432524115,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.124+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-pap | auto.include.jmx.reporter = true 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.471287562Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=844.081µs 23:16:59 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"7b662de9-702e-4d3b-a521-88c62a87dc66","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"0ccb37a6-3abc-4fbc-a096-737e342c9174","timestampMs":1714432524115,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.124+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:59 policy-pap | auto.offset.reset = latest 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.475505488Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.145+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-apex-pdp | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09959dc9-506c-4a36-a595-66f19d2e88a7","timestampMs":1714432524124,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-pap | bootstrap.servers = [kafka:9092] 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.476636493Z level=info msg="Migration successfully executed" id="drop index 
UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.128575ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.146+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"09959dc9-506c-4a36-a595-66f19d2e88a7","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f20e76c5-f318-4326-8046-cf76b3b0b2d1","timestampMs":1714432524146,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-pap | check.crcs = true 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.480888269Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.158+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"09959dc9-506c-4a36-a595-66f19d2e88a7","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"f20e76c5-f318-4326-8046-cf76b3b0b2d1","timestampMs":1714432524146,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.490761839Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.86931ms 23:16:59 policy-apex-pdp | [2024-04-29T23:15:24.158+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:59 policy-apex-pdp | [2024-04-29T23:15:56.160+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.2 - policyadmin [29/Apr/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.51.2" 23:16:59 policy-pap | client.id = consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-1 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.493773928Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:59 policy-apex-pdp | [2024-04-29T23:16:56.083+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.2 - policyadmin [29/Apr/2024:23:16:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2" 23:16:59 policy-pap 
| client.rack = 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.49459543Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=821.162µs 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:host.name=3ba040297ad4 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | connections.max.idle.ms = 540000 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.497751662Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | default.api.timeout.ms = 60000 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.498740954Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=988.912µs 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | enable.auto.commit = true 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.503611409Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | exclude.internal.topics = true 23:16:59 policy-pap | fetch.max.bytes = 52428800 23:16:59 policy-pap | fetch.max.wait.ms = 500 23:16:59 policy-pap | fetch.min.bytes = 1 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.503975653Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=365.894µs 23:16:59 policy-pap | group.id = 138f9fa3-ce1b-405c-9d22-e6763c020d7f 23:16:59 policy-pap | group.instance.id = null 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Fi
nal.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/ka
fka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tool
s-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.50672237Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | heartbeat.interval.ms = 3000 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | interceptor.classes = [] 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-db-migrator | 23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 kafka | [2024-04-29 23:15:03,043] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | internal.leave.group.on.close = true 23:16:59 
23:16:59 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
23:16:59 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.507911136Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.187607ms
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 policy-pap | isolation.level = read_uncommitted
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.511944998Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.512371684Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=429.006µs
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | max.partition.fetch.bytes = 1048576
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.518378733Z level=info msg="Executing migration" id="Add created time to annotation table"
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | max.poll.interval.ms = 300000
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | max.poll.records = 500
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.523178017Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.799433ms
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,669] INFO Server environment:os.memory.free=492MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | metadata.max.age.ms = 300000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.531743459Z level=info msg="Executing migration" id="Add updated time to annotation table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.538600919Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.8598ms
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | metric.reporters = []
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.542110156Z level=info msg="Executing migration" id="Add index for created in annotation table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.543499654Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.389708ms
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | metrics.num.samples = 2
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.5485076Z level=info msg="Executing migration" id="Add index for updated in annotation table"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.549588534Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.083004ms
23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 policy-pap | metrics.recording.level = INFO
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.553374965Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.553600728Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=225.003µs
kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | metrics.sample.window.ms = 30000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.55682406Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.561664274Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.839554ms 23:16:59 kafka | [2024-04-29 23:15:03,044] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.564854496Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.56593034Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.076444ms 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | receive.buffer.bytes = 65536 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.569938933Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.570135416Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=197.133µs 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | reconnect.backoff.max.ms = 1000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.573581141Z level=info msg="Executing migration" id="Move region to single row" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.574062837Z level=info msg="Migration successfully executed" id="Move region to single row" duration=482.086µs 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | reconnect.backoff.ms = 50 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:28.5788343Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.579891384Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.095134ms 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,670] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | request.timeout.ms = 30000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.584468534Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.585378126Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=910.142µs 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,671] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:59 policy-pap | retry.backoff.ms = 100 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.588840992Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:28.589838865Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=998.003µs 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,672] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | sasl.client.callback.handler.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.592877436Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.593818838Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=940.813µs 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,672] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-pap | sasl.jaas.config = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.598778313Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.599637084Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=858.691µs 23:16:59 kafka | [2024-04-29 23:15:03,045] 
INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,673] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:59 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.60387735Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.604928284Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.054894ms 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,673] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:59 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.610403866Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.610519438Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=116.392µs 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,673] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:59 policy-pap | sasl.kerberos.service.name = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.614861105Z level=info msg="Executing migration" id="create test_data table" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,673] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:59 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.616059761Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.202035ms 23:16:59 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:59 kafka | [2024-04-29 23:15:03,045] INFO [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,674] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:59 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.620685862Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,674] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:59 policy-pap | sasl.login.callback.handler.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.621681445Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=996.103µs 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 23:16:59 policy-pap | sasl.login.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.625753029Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:59 zookeeper | [2024-04-29 23:14:27,674] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.connect.timeout.ms = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.626686571Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=933.532µs 23:16:59 zookeeper | [2024-04-29 23:14:27,674] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.read.timeout.ms = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.629733631Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:59 zookeeper | [2024-04-29 23:14:27,676] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:59 zookeeper | [2024-04-29 23:14:27,676] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.630774974Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.039363ms 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:59 zookeeper | [2024-04-29 23:14:27,676] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.635204933Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:59 zookeeper | [2024-04-29 23:14:27,676] INFO zookeeper.enforce.auth.schemes = [] 
(org.apache.zookeeper.server.AuthenticationHelper) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.635386495Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=182.062µs 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:59 zookeeper | [2024-04-29 23:14:27,676] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.638400905Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:59 zookeeper | [2024-04-29 23:14:27,695] INFO Logging initialized @497ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.639007903Z level=info 
msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=609.868µs 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:59 zookeeper | [2024-04-29 23:14:27,774] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.641980522Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 policy-pap | sasl.mechanism = GSSAPI 23:16:59 zookeeper | [2024-04-29 23:14:27,774] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:59 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.642085993Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=102.881µs 23:16:59 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:59 kafka | [2024-04-29 23:15:03,046] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,792] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.646681354Z level=info msg="Executing migration" id="create team table" 23:16:59 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:59 kafka | [2024-04-29 23:15:03,047] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,823] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.647543365Z level=info msg="Migration successfully executed" id="create team table" duration=861.151µs 23:16:59 kafka | [2024-04-29 23:15:03,047] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,823] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:59 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.651992635Z 
level=info msg="Executing migration" id="add index team.org_id" 23:16:59 kafka | [2024-04-29 23:15:03,047] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,826] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.652998427Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.005653ms 23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,830] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.657873942Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,841] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:59 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.659290621Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.419449ms 23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,860] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.662797897Z level=info msg="Executing migration" id="Add column uid in team" 23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,860] INFO Started @663ms (org.eclipse.jetty.server.Server) 23:16:59 policy-pap | 
sasl.oauthbearer.scope.claim.name = scope 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.66688692Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.088963ms 23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,860] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:59 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.67059749Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:59 zookeeper | [2024-04-29 23:14:27,864] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:59 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:28.670848913Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=253.893µs
23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,865] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:59 policy-pap | security.protocol = PLAINTEXT
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.677997967Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,866] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:59 policy-pap | security.providers = null
23:16:59 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.679517298Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.51839ms
23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,867] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:59 policy-pap | send.buffer.bytes = 131072
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.687783076Z level=info msg="Executing migration" id="create team member table"
23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,878] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:59 policy-pap | session.timeout.ms = 45000
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.68884235Z level=info msg="Migration successfully executed" id="create team member table" duration=1.058894ms
23:16:59 kafka | [2024-04-29 23:15:03,052] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,878] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:59 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.692522298Z level=info msg="Executing migration" id="add index team_member.org_id"
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,880] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
23:16:59 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.693449001Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=926.553µs
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,880] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
23:16:59 policy-pap | ssl.cipher.suites = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.6986594Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,885] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
23:16:59 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:59 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.699638173Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=981.393µs
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,885] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:59 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.702560331Z level=info msg="Executing migration" id="add index team_member.team_id"
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
23:16:59 zookeeper | [2024-04-29 23:14:27,888] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
23:16:59 policy-pap | ssl.engine.factory.class = null
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.703486553Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=926.382µs
23:16:59 zookeeper | [2024-04-29 23:14:27,889] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
23:16:59 policy-pap | ssl.key.password = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.707326374Z level=info msg="Executing migration" id="Add column email to team table"
23:16:59 zookeeper | [2024-04-29 23:14:27,889] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
23:16:59 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.712002385Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.675491ms
23:16:59 zookeeper | [2024-04-29 23:14:27,898] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
23:16:59 policy-pap | ssl.keystore.certificate.chain = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.715536132Z level=info msg="Executing migration" id="Add column external to team_member table"
23:16:59 zookeeper | [2024-04-29 23:14:27,898] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
23:16:59 policy-pap | ssl.keystore.key = null
23:16:59 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.720018881Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.482869ms
23:16:59 zookeeper | [2024-04-29 23:14:27,916] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
23:16:59 policy-pap | ssl.keystore.location = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.727142375Z level=info msg="Executing migration" id="Add column permission to team_member table"
23:16:59 zookeeper | [2024-04-29 23:14:27,917] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
23:16:59 policy-pap | ssl.keystore.password = null
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.731614124Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.471068ms
23:16:59 zookeeper | [2024-04-29 23:14:28,941] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:16:59 policy-pap | ssl.keystore.type = JKS
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.734729346Z level=info msg="Executing migration" id="create dashboard acl table"
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
23:16:59 policy-pap | ssl.protocol = TLSv1.3
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.735644457Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=914.872µs
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:16:59 policy-pap | ssl.provider = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.743197587Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:16:59 policy-pap | ssl.secure.random.implementation = null
23:16:59 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.744771217Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.57398ms
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:59 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.749443439Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
23:16:59 kafka | [2024-04-29 23:15:03,053] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:59 policy-pap | ssl.truststore.certificates = null
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.750642965Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.199176ms
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:59 policy-pap | ssl.truststore.location = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.754548557Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.755498089Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=949.072µs
23:16:59 policy-pap | ssl.truststore.password = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.762106546Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:16:59 policy-pap | ssl.truststore.type = JKS
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.763086629Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=980.133µs
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
23:16:59 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
23:16:59 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.767116442Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
23:16:59 policy-pap |
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.768776214Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.661682ms
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:16:59 policy-pap | [2024-04-29T23:15:00.776+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.772695445Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.774160335Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.46447ms
23:16:59 policy-pap | [2024-04-29T23:15:00.777+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.780521529Z level=info msg="Executing migration" id="add index dashboard_permission"
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:16:59 policy-db-migrator | > upgrade 0450-pdpgroup.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.781485781Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=966.372µs
23:16:59 policy-pap | [2024-04-29T23:15:00.777+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432500774
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.786525858Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
23:16:59 policy-pap | [2024-04-29T23:15:00.780+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-1, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Subscribed to topic(s): policy-pdp-pap
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.787306108Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=780.61µs
23:16:59 policy-pap | [2024-04-29T23:15:00.781+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.791696826Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
23:16:59 policy-pap | allow.auto.create.topics = true
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:16:59 policy-pap | auto.commit.interval.ms = 5000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.792130462Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=434.285µs
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.797864747Z level=info msg="Executing migration" id="create tag table"
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:16:59 policy-pap | auto.include.jmx.reporter = true
23:16:59 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.799111933Z level=info msg="Migration successfully executed" id="create tag table" duration=1.249316ms
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
23:16:59 policy-pap | auto.offset.reset = latest
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.804229581Z level=info msg="Executing migration" id="add index tag.key_value"
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.805171514Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=941.953µs
23:16:59 kafka | [2024-04-29 23:15:03,054] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.810406963Z level=info msg="Executing migration" id="create login attempt table"
23:16:59 policy-pap | bootstrap.servers = [kafka:9092]
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
23:16:59 policy-pap | check.crcs = true
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.811650449Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.243796ms
23:16:59 kafka | [2024-04-29 23:15:03,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.818813644Z level=info msg="Executing migration" id="add index login_attempt.username"
23:16:59 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:59 policy-db-migrator | > upgrade 0470-pdp.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.820446525Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.632631ms
23:16:59 policy-pap | client.id = consumer-policy-pap-2
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.825045286Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
23:16:59 kafka | [2024-04-29 23:15:03,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | client.rack =
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.826080179Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.036383ms
23:16:59 kafka | [2024-04-29 23:15:03,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:59 policy-pap | connections.max.idle.ms = 540000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.831512901Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
23:16:59 kafka | [2024-04-29 23:15:03,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | default.api.timeout.ms = 60000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.8473342Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.828899ms
23:16:59 policy-pap | enable.auto.commit = true
23:16:59 kafka | [2024-04-29 23:15:03,055] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.851364202Z level=info msg="Executing migration" id="create login_attempt v2"
23:16:59 policy-pap | exclude.internal.topics = true
23:16:59 kafka | [2024-04-29 23:15:03,056] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.851921699Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=555.107µs
23:16:59 policy-pap | fetch.max.bytes = 52428800
23:16:59 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.857131488Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
23:16:59 kafka | [2024-04-29 23:15:03,063] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
23:16:59 policy-pap | fetch.max.wait.ms = 500
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.85870742Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.574941ms
23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
23:16:59 policy-pap | fetch.min.bytes = 1
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.863679265Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
23:16:59 policy-pap | group.id = policy-pap
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.863989989Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=304.364µs
23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
23:16:59 policy-pap | group.instance.id = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.869081295Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
23:16:59 policy-pap | heartbeat.interval.ms = 3000
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.870104629Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.020844ms
23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
23:16:59 policy-pap | interceptor.classes = []
23:16:59 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
23:16:59 policy-pap | internal.leave.group.on.close = true
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.876678796Z level=info msg="Executing migration" id="create user auth table"
23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.878551381Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.871745ms
23:16:59 kafka | [2024-04-29
23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:59 policy-pap | isolation.level = read_uncommitted 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.88454301Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.885622474Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.078714ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | max.partition.fetch.bytes = 1048576 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.892342733Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | max.poll.interval.ms = 300000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.892488935Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=147.372µs 23:16:59 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | max.poll.records = 500 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.898448783Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | metadata.max.age.ms = 300000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.906314167Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.864514ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | metric.reporters = [] 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.910783685Z level=info msg="Executing migration" id="Add 
OAuth refresh token to user_auth" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | metrics.num.samples = 2 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.919240027Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.455131ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | metrics.recording.level = INFO 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.924705159Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | metrics.sample.window.ms = 30000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.930550446Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.845337ms 23:16:59 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.935767004Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:59 policy-db-migrator | 
-------------- 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | receive.buffer.bytes = 65536 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.941322198Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.552784ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | reconnect.backoff.max.ms = 1000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.950953085Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | reconnect.backoff.ms = 50 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.952159411Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.206016ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | request.timeout.ms = 30000 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:28.958939141Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | retry.backoff.ms = 100 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.968689049Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=9.753228ms 23:16:59 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.client.callback.handler.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.974624717Z level=info msg="Executing migration" id="create server_lock table" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.jaas.config = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.975252065Z level=info msg="Migration successfully executed" id="create server_lock table" duration=627.048µs 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.980838778Z 
level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.982666603Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.831235ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.kerberos.service.name = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.988034624Z level=info msg="Executing migration" id="create user auth token table" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,065] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.989020957Z level=info msg="Migration successfully executed" id="create user auth token table" duration=985.833µs 23:16:59 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.995216168Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:59 policy-db-migrator | -------------- 
23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.callback.handler.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:28.996817159Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.599891ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.000641159Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.connect.timeout.ms = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.002505974Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.866005ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.read.timeout.ms = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.006394594Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.007448921Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.054087ms 23:16:59 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.012805921Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.021399075Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.594464ms 23:16:59 
kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.02566485Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.026644209Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=976.989µs 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.031076456Z level=info msg="Executing migration" id="create cache_data table" 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.mechanism = GSSAPI 23:16:59 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.032044685Z level=info msg="Migration successfully executed" id="create cache_data table" duration=969.379µs 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | 
sasl.oauthbearer.clock.skew.seconds = 30 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.03624377Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.037533321Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.290381ms 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.04218167Z level=info msg="Executing migration" id="create short_url table v1" 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.043819654Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.637484ms 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.048469713Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:59 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.050235488Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.764995ms 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.053511026Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:59 
grafana | logger=migrator t=2024-04-29T23:14:29.053646097Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=135.041µs 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.057089567Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.057169848Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=79.801µs 23:16:59 kafka | [2024-04-29 23:15:03,066] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:59 policy-pap | security.protocol = PLAINTEXT 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.060232853Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:59 kafka | [2024-04-29 23:15:03,066] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:59 policy-pap | security.providers = null 23:16:59 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.06104793Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=814.797µs 23:16:59 kafka | [2024-04-29 23:15:03,069] 
INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:59 policy-pap | send.buffer.bytes = 131072 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.066068513Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | session.timeout.ms = 45000 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.06699054Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=921.917µs 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.070150287Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:59 kafka 
| [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.071076875Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=926.178µs 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.cipher.suites = null 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.074505414Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:59 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.074690746Z level=info msg="Migration successfully executed" id="alter alert_definition table data 
column to mediumtext in mysql" duration=185.102µs 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.07986471Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.engine.factory.class = null 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.081014039Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.148949ms 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.key.password = null 
23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.085995591Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.087593204Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.596983ms 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.keystore.certificate.chain = null 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.09179131Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.keystore.key = null 
23:16:59 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.092806848Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.015328ms 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.keystore.location = null 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.097490158Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.keystore.password = null 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.098529357Z level=info msg="Migration successfully executed" id="add unique 
index in alert_definition on org_id and uid columns" duration=1.038749ms 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.keystore.type = JKS 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.103103076Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.protocol = TLSv1.3 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.108927555Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.823889ms 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.provider = null 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.113565235Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:59 kafka | [2024-04-29 
23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.secure.random.implementation = null 23:16:59 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.114554153Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=985.828µs 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.122382419Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.truststore.certificates = null 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) 
NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.122605491Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=223.662µs 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-pap | ssl.truststore.location = null 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.127685954Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:59 policy-pap | ssl.truststore.password = null 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.129350058Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.663284ms 23:16:59 policy-pap | ssl.truststore.type = JKS 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.134561742Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:59 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.135748813Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.18318ms 23:16:59 policy-pap | 23:16:59 kafka | [2024-04-29 23:15:03,070] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.143235385Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:59 policy-pap | [2024-04-29T23:15:00.795+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:59 kafka | 
[2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.145158602Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.922887ms 23:16:59 policy-pap | [2024-04-29T23:15:00.795+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.149819652Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:59 policy-pap | [2024-04-29T23:15:00.795+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432500795 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from 
controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.150198085Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=377.533µs 23:16:59 policy-pap | [2024-04-29T23:15:00.796+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.1544566Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:59 policy-pap | [2024-04-29T23:15:01.102+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.155460709Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.004199ms 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:01.249+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.160527921Z level=info msg="Executing migration" id="create alert_instance table" 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, 
conceptContainerName, conceptContainerVersion)) 23:16:59 policy-pap | [2024-04-29T23:15:01.469+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@77db231c, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@60b4d934, org.springframework.security.web.context.SecurityContextHolderFilter@5ffdd510, org.springframework.security.web.header.HeaderWriterFilter@29dfc68f, org.springframework.security.web.authentication.logout.LogoutFilter@3b1137b0, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5d98364c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6719f206, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@344a065a, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6a3e633a, org.springframework.security.web.access.ExceptionTranslationFilter@22172b00, org.springframework.security.web.access.intercept.AuthorizationFilter@2435c6ae] 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.161607651Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.07907ms 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:02.206+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.165729936Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:02.300+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.166877966Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.1507ms 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:02.324+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.171974859Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:59 policy-db-migrator | > upgrade 
0630-toscanodetype.sql 23:16:59 policy-pap | [2024-04-29T23:15:02.341+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.173110778Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.135579ms 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:02.341+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.179063409Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:59 policy-pap | [2024-04-29T23:15:02.342+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.18982847Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=10.763611ms 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:02.343+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.192902876Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:02.343+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.193705402Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" 
duration=803.036µs 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:02.344+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.199111018Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:59 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:59 policy-pap | [2024-04-29T23:15:02.344+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.200049397Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=935.539µs 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:02.345+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=138f9fa3-ce1b-405c-9d22-e6763c020d7f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, 
toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@17e8caf2 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.204604075Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:59 policy-pap | [2024-04-29T23:15:02.356+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=138f9fa3-ce1b-405c-9d22-e6763c020d7f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.23121026Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.605675ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | [2024-04-29T23:15:02.357+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.236800718Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | allow.auto.create.topics = true
23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.265785393Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=28.981205ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | auto.commit.interval.ms = 5000
23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.269396703Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
23:16:59 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
23:16:59 policy-pap | auto.include.jmx.reporter = true
23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.270493423Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.09666ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | auto.offset.reset = latest
23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.277641542Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:59 policy-pap | bootstrap.servers = [kafka:9092]
23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.279332397Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.690975ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | check.crcs = true
23:16:59 kafka | [2024-04-29 23:15:03,071] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.285083466Z level=info msg="Executing migration" id="add current_reason column related to current_state"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.293124703Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.039867ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | client.id = consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.296450081Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
23:16:59 policy-db-migrator | > upgrade 0660-toscaparameter.sql
23:16:59 policy-pap | client.rack =
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.305200955Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=8.749494ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | connections.max.idle.ms = 540000
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.310387939Z level=info msg="Executing migration" id="create alert_rule table"
23:16:59 policy-pap | default.api.timeout.ms = 60000
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.311433977Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.045878ms
23:16:59 policy-pap | enable.auto.commit = true
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.316423349Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
23:16:59 policy-pap | exclude.internal.topics = true
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.317382317Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=956.428µs
23:16:59 policy-pap | fetch.max.bytes = 52428800
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.321595233Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
23:16:59 policy-pap | fetch.max.wait.ms = 500
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:59 policy-db-migrator | > upgrade 0670-toscapolicies.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.323993114Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=2.396891ms
23:16:59 policy-pap | fetch.min.bytes = 1
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.37819967Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
23:16:59 policy-pap | group.id = 138f9fa3-ce1b-405c-9d22-e6763c020d7f
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.380672581Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.473921ms
23:16:59 policy-pap | group.instance.id = null
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.385763794Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
23:16:59 policy-pap | heartbeat.interval.ms = 3000
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.385810604Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=47.42µs
23:16:59 policy-pap | interceptor.classes = []
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.390329663Z level=info msg="Executing migration" id="add column for to alert_rule"
23:16:59 policy-pap | internal.leave.group.on.close = true
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:59 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.400430728Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=10.099785ms
23:16:59 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.409162021Z level=info msg="Executing migration" id="add column annotations to alert_rule"
23:16:59 policy-pap | isolation.level = read_uncommitted
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.415658236Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.493785ms
23:16:59 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.420558567Z level=info msg="Executing migration" id="add column labels to alert_rule"
23:16:59 policy-pap | max.partition.fetch.bytes = 1048576
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.42669578Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.136663ms
23:16:59 policy-pap | max.poll.interval.ms = 300000
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,119] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.432119724Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
23:16:59 policy-pap | max.poll.records = 500
23:16:59 policy-db-migrator | > upgrade 0690-toscapolicy.sql
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.433644168Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.585774ms
23:16:59 policy-pap | metadata.max.age.ms = 300000
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.442045578Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
23:16:59 policy-pap | metric.reporters = []
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.443285908Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.24243ms
23:16:59 policy-pap | metrics.num.samples = 2
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.449888905Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
23:16:59 policy-pap | metrics.recording.level = INFO
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.456321819Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.421584ms
23:16:59 policy-pap | metrics.sample.window.ms = 30000
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.462929834Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
23:16:59 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:59 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.469232708Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.302794ms
23:16:59 policy-pap | receive.buffer.bytes = 65536
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.474863795Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
23:16:59 policy-pap | reconnect.backoff.max.ms = 1000
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:59 policy-pap | reconnect.backoff.ms = 50
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.475890444Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.031229ms
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:59 policy-pap | request.timeout.ms = 30000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.480920526Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:59 policy-pap | retry.backoff.ms = 100
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.487037918Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.115611ms
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:59 policy-pap | sasl.client.callback.handler.class = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.493149239Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
23:16:59 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:59 policy-pap | sasl.jaas.config = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.501306528Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=8.153609ms
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:59 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.505305452Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:59 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.505389923Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=85.541µs
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:59 policy-pap | sasl.kerberos.service.name = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.510648326Z level=info msg="Executing migration" id="create alert_rule_version table"
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:59 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.511840137Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.191591ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:59 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.517561465Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
23:16:59 policy-pap | sasl.login.callback.handler.class = null
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.519994456Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.429931ms
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:59 policy-pap | sasl.login.class = null
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.527117685Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:59 policy-pap | sasl.login.connect.timeout.ms = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.528625298Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.509293ms
23:16:59 policy-pap | sasl.login.read.timeout.ms = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.535296964Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.535439265Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=143.861µs
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.540870321Z level=info msg="Executing migration" id="add column for to alert_rule_version"
23:16:59 policy-db-migrator | > upgrade 0730-toscaproperty.sql
23:16:59 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.549874277Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.001916ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.555928299Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:59 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.566618608Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=10.685349ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.573217824Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.581838347Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.610783ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.mechanism = GSSAPI
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.588462592Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
23:16:59 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
23:16:59 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.597705741Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=9.236819ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:59 kafka | [2024-04-29 23:15:03,120] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.604279606Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
23:16:59 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.612370684Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.094338ms
23:16:59 kafka | [2024-04-29 23:15:03,121] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.624569207Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
23:16:59 kafka | [2024-04-29 23:15:03,121] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.624696348Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=131.041µs
23:16:59 kafka | [2024-04-29 23:15:03,190] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.628638002Z level=info msg="Executing migration" id=create_alert_configuration_table
23:16:59 kafka | [2024-04-29 23:15:03,201] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.629311757Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=673.645µs
23:16:59 kafka | [2024-04-29 23:15:03,203] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.635047606Z level=info msg="Executing migration" id="Add column default in alert_configuration"
23:16:59 kafka | [2024-04-29 23:15:03,204] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
23:16:59 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.641738381Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.690745ms
23:16:59 kafka | [2024-04-29 23:15:03,205] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.646727194Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
23:16:59 kafka | [2024-04-29 23:15:03,215] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | security.protocol = PLAINTEXT
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.646811264Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=84.81µs
23:16:59 policy-db-migrator |
23:16:59 policy-pap | security.providers = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.651454634Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
23:16:59 kafka | [2024-04-29 23:15:03,216] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
23:16:59 policy-pap | send.buffer.bytes = 131072
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.660933313Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=9.477149ms
23:16:59 kafka | [2024-04-29 23:15:03,216] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | session.timeout.ms = 45000
23:16:59 grafana | logger=migrator
t=2024-04-29T23:14:29.666205468Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:59 kafka | [2024-04-29 23:15:03,216] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:59 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.667032075Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=826.407µs 23:16:59 kafka | [2024-04-29 23:15:03,216] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:03,225] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.672004737Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:59 kafka | [2024-04-29 23:15:03,227] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-pap | ssl.cipher.suites = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.677856656Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.854349ms 23:16:59 kafka | [2024-04-29 23:15:03,230] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:59 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.681161864Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:59 kafka | [2024-04-29 23:15:03,230] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.681713779Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=551.895µs 23:16:59 kafka | [2024-04-29 23:15:03,231] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high 
watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-pap | ssl.engine.factory.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.688445395Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:59 kafka | [2024-04-29 23:15:03,239] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-pap | ssl.key.password = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.689276902Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=831.557µs 23:16:59 kafka | [2024-04-29 23:15:03,239] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.693027945Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:59 kafka | [2024-04-29 23:15:03,239] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:59 policy-pap | ssl.keystore.certificate.chain = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.700154114Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.125189ms 23:16:59 kafka | [2024-04-29 23:15:03,239] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-pap | ssl.keystore.key = null 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:29.705970453Z level=info msg="Executing migration" id="create provenance_type table" 23:16:59 kafka | [2024-04-29 23:15:03,240] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-pap | ssl.keystore.location = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.706529708Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=558.995µs 23:16:59 kafka | [2024-04-29 23:15:03,247] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-pap | ssl.keystore.password = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.711485369Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:59 kafka | [2024-04-29 23:15:03,248] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-pap | ssl.keystore.type = JKS 23:16:59 kafka | [2024-04-29 23:15:03,248] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.713038733Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.552684ms 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.719157935Z level=info msg="Executing migration" id="create alert_image table" 23:16:59 policy-db-migrator | 
-------------- 23:16:59 kafka | [2024-04-29 23:15:03,248] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-pap | ssl.protocol = TLSv1.3 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.719718599Z level=info msg="Migration successfully executed" id="create alert_image table" duration=561.235µs 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,248] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-pap | ssl.provider = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.72347137Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,255] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-pap | ssl.secure.random.implementation = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.724163677Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=692.107µs 23:16:59 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:59 kafka | [2024-04-29 23:15:03,260] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.72818316Z level=info msg="Executing migration" id="support longer URLs in 
alert_image table" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,261] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:59 policy-pap | ssl.truststore.certificates = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.728306361Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=124.711µs 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:59 kafka | [2024-04-29 23:15:03,261] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-pap | ssl.truststore.location = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.733810118Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,263] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 policy-pap | ssl.truststore.password = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.735970556Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=2.160428ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,270] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-pap | ssl.truststore.type = JKS 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.74120892Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:59 kafka | [2024-04-29 23:15:03,270] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.742089117Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=880.477µs 23:16:59 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:59 policy-pap | 23:16:59 kafka | [2024-04-29 23:15:03,271] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.745859479Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:02.362+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:59 kafka | [2024-04-29 23:15:03,273] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition 
__consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.746227763Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:59 policy-pap | [2024-04-29T23:15:02.362+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:59 kafka | [2024-04-29 23:15:03,273] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.751437926Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:02.362+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432502362 23:16:59 kafka | [2024-04-29 23:15:03,282] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.75178526Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=347.424µs 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:02.362+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Subscribed to topic(s): policy-pdp-pap 23:16:59 kafka | [2024-04-29 23:15:03,283] INFO Created log for 
partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.756136176Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:02.363+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:59 kafka | [2024-04-29 23:15:03,283] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.757082473Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=944.427µs 23:16:59 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:59 policy-pap | [2024-04-29T23:15:02.363+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=61090288-88b1-492f-a004-3449c9445940, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4270705f 23:16:59 kafka | [2024-04-29 23:15:03,283] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.761008127Z level=info 
msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:02.363+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=61090288-88b1-492f-a004-3449c9445940, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:59 kafka | [2024-04-29 23:15:03,283] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.769416628Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.407831ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:59 policy-pap | [2024-04-29T23:15:02.364+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:59 kafka | [2024-04-29 23:15:03,290] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.774581372Z level=info msg="Executing migration" id="create library_element table v1" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | allow.auto.create.topics = true 23:16:59 kafka | [2024-04-29 23:15:03,290] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.775293678Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=710.926µs 23:16:59 policy-db-migrator | 23:16:59 policy-pap | auto.commit.interval.ms = 5000 23:16:59 kafka | [2024-04-29 23:15:03,290] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:29.779281231Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | auto.include.jmx.reporter = true 23:16:59 kafka | [2024-04-29 23:15:03,290] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.780975585Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.693534ms 23:16:59 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:59 policy-pap | auto.offset.reset = latest 23:16:59 kafka | [2024-04-29 23:15:03,291] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.785656065Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | bootstrap.servers = [kafka:9092] 23:16:59 kafka | [2024-04-29 23:15:03,298] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.787067927Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.411692ms 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:59 policy-pap | check.crcs = true 23:16:59 kafka | [2024-04-29 23:15:03,298] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.792543253Z level=info msg="Executing migration" id="add index 
library_element_connection element_id-kind-connection_id" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:59 kafka | [2024-04-29 23:15:03,298] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.793562701Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.019338ms 23:16:59 policy-db-migrator | 23:16:59 policy-pap | client.id = consumer-policy-pap-4 23:16:59 kafka | [2024-04-29 23:15:03,298] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.800825403Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | client.rack = 23:16:59 kafka | [2024-04-29 23:15:03,299] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.801833931Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.007988ms 23:16:59 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:59 policy-pap | connections.max.idle.ms = 540000 23:16:59 kafka | [2024-04-29 23:15:03,306] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.807473808Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | default.api.timeout.ms = 60000 23:16:59 kafka | [2024-04-29 23:15:03,306] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.807533159Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=42.261µs 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:59 policy-pap | enable.auto.commit = true 23:16:59 kafka | [2024-04-29 23:15:03,306] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:29.815202674Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | exclude.internal.topics = true
23:16:59 kafka | [2024-04-29 23:15:03,306] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.815264724Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=63.01µs
23:16:59 policy-db-migrator |
23:16:59 policy-pap | fetch.max.bytes = 52428800
23:16:59 kafka | [2024-04-29 23:15:03,307] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.82434538Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | fetch.max.wait.ms = 500
23:16:59 kafka | [2024-04-29 23:15:03,317] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.824574273Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=231.093µs
23:16:59 policy-db-migrator | > upgrade 0820-toscatrigger.sql
23:16:59 policy-pap | fetch.min.bytes = 1
23:16:59 kafka | [2024-04-29 23:15:03,317] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.826513569Z level=info msg="Executing migration" id="create data_keys table"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | group.id = policy-pap
23:16:59 kafka | [2024-04-29 23:15:03,317] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.827264136Z level=info msg="Migration successfully executed" id="create data_keys table" duration=750.407µs
23:16:59 policy-pap | group.instance.id = null
23:16:59 kafka | [2024-04-29 23:15:03,317] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.829613645Z level=info msg="Executing migration" id="create secrets table"
23:16:59 policy-pap | heartbeat.interval.ms = 3000
23:16:59 kafka | [2024-04-29 23:15:03,318] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.830223331Z level=info msg="Migration successfully executed" id="create secrets table" duration=609.596µs
23:16:59 policy-pap | interceptor.classes = []
23:16:59 kafka | [2024-04-29 23:15:03,325] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.835160102Z level=info msg="Executing migration" id="rename data_keys name column to id"
23:16:59 policy-pap | internal.leave.group.on.close = true
23:16:59 kafka | [2024-04-29 23:15:03,326] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.873456595Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=38.299253ms
23:16:59 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:59 kafka | [2024-04-29 23:15:03,326] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.877088685Z level=info msg="Executing migration" id="add name column into data_keys"
23:16:59 policy-pap | isolation.level = read_uncommitted
23:16:59 kafka | [2024-04-29 23:15:03,326] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.882468661Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.379536ms
23:16:59 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:59 kafka | [2024-04-29 23:15:03,326] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.88601717Z level=info msg="Executing migration" id="copy data_keys id column values into name"
23:16:59 policy-pap | max.partition.fetch.bytes = 1048576
23:16:59 kafka | [2024-04-29 23:15:03,334] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.886185253Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=168.233µs
23:16:59 policy-pap | max.poll.interval.ms = 300000
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,334] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.891657848Z level=info msg="Executing migration" id="rename data_keys name column to label"
23:16:59 policy-pap | max.poll.records = 500
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,334] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.927200498Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.53986ms
23:16:59 policy-pap | metadata.max.age.ms = 300000
23:16:59 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
23:16:59 kafka | [2024-04-29 23:15:03,334] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.930844988Z level=info msg="Executing migration" id="rename data_keys id column back to name"
23:16:59 policy-pap | metric.reporters = []
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,334] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.970102329Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=39.258161ms
23:16:59 policy-pap | metrics.num.samples = 2
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
23:16:59 kafka | [2024-04-29 23:15:03,342] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.973324667Z level=info msg="Executing migration" id="create kv_store table v1"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,342] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-pap | metrics.recording.level = INFO
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,342] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.974440746Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.116219ms
23:16:59 policy-pap | metrics.sample.window.ms = 30000
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,342] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.98917112Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
23:16:59 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:59 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
23:16:59 kafka | [2024-04-29 23:15:03,342] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.990424141Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.235961ms
23:16:59 policy-pap | receive.buffer.bytes = 65536
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,350] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.993604148Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
23:16:59 policy-pap | reconnect.backoff.max.ms = 1000
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
23:16:59 kafka | [2024-04-29 23:15:03,351] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.99382907Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=223.102µs
23:16:59 policy-pap | reconnect.backoff.ms = 50
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,351] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.997076237Z level=info msg="Executing migration" id="create permission table"
23:16:59 policy-pap | request.timeout.ms = 30000
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,351] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:29.997915224Z level=info msg="Migration successfully executed" id="create permission table" duration=838.557µs
23:16:59 policy-pap | retry.backoff.ms = 100
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,351] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-pap | sasl.client.callback.handler.class = null
23:16:59 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:59 kafka | [2024-04-29 23:15:03,360] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.004071696Z level=info msg="Executing migration" id="add unique index permission.role_id"
23:16:59 policy-pap | sasl.jaas.config = null
23:16:59 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:59 kafka | [2024-04-29 23:15:03,362] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.005014244Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=944.188µs
23:16:59 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:59 kafka | [2024-04-29 23:15:03,362] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.008480019Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.kerberos.service.name = null
23:16:59 kafka | [2024-04-29 23:15:03,362] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.009474676Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=994.577µs
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
23:16:59 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:59 kafka | [2024-04-29 23:15:03,362] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.012630537Z level=info msg="Executing migration" id="create role table"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:59 kafka | [2024-04-29 23:15:03,371] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.014150265Z level=info msg="Migration successfully executed" id="create role table" duration=1.519528ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.callback.handler.class = null
23:16:59 kafka | [2024-04-29 23:15:03,372] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.0255244Z level=info msg="Executing migration" id="add column display_name"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.class = null
23:16:59 kafka | [2024-04-29 23:15:03,372] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.032860338Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.335198ms
23:16:59 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:59 policy-pap | sasl.login.connect.timeout.ms = null
23:16:59 kafka | [2024-04-29 23:15:03,372] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.037570931Z level=info msg="Executing migration" id="add column group_name"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.read.timeout.ms = null
23:16:59 kafka | [2024-04-29 23:15:03,372] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.042692579Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.121388ms
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
23:16:59 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:59 kafka | [2024-04-29 23:15:03,382] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.050370802Z level=info msg="Executing migration" id="add index role.org_id"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:59 kafka | [2024-04-29 23:15:03,383] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.051332674Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=961.822µs
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:59 kafka | [2024-04-29 23:15:03,383] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.060608699Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:59 kafka | [2024-04-29 23:15:03,383] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.061897395Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.288636ms
23:16:59 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:59 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:59 kafka | [2024-04-29 23:15:03,383] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.071669666Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:59 kafka | [2024-04-29 23:15:03,396] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.073172336Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.50224ms
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
23:16:59 policy-pap | sasl.mechanism = GSSAPI
23:16:59 kafka | [2024-04-29 23:15:03,396] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.082591712Z level=info msg="Executing migration" id="create team role table"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:59 kafka | [2024-04-29 23:15:03,397] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.083407123Z level=info msg="Migration successfully executed" id="create team role table" duration=815.181µs
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:59 kafka | [2024-04-29 23:15:03,397] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.094558212Z level=info msg="Executing migration" id="add index team_role.org_id"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:59 kafka | [2024-04-29 23:15:03,397] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.096908173Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.351541ms
23:16:59 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:59 kafka | [2024-04-29 23:15:03,407] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.107993791Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.109246448Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.255497ms
23:16:59 kafka | [2024-04-29 23:15:03,407] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.11386977Z level=info msg="Executing migration" id="add index team_role.team_id"
23:16:59 kafka | [2024-04-29 23:15:03,407] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.115779024Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.907114ms
23:16:59 kafka | [2024-04-29 23:15:03,407] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.123060062Z level=info msg="Executing migration" id="create user role table"
23:16:59 kafka | [2024-04-29 23:15:03,407] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.124206518Z level=info msg="Migration successfully executed" id="create user role table" duration=1.147686ms
23:16:59 kafka | [2024-04-29 23:15:03,414] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:59 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.128822199Z level=info msg="Executing migration" id="add index user_role.org_id"
23:16:59 kafka | [2024-04-29 23:15:03,415] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | security.protocol = PLAINTEXT
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.129784931Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=962.812µs
23:16:59 kafka | [2024-04-29 23:15:03,415] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
23:16:59 policy-pap | security.providers = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.134590526Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:16:59 kafka | [2024-04-29 23:15:03,415] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | send.buffer.bytes = 131072
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.13557979Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=989.194µs
23:16:59 kafka | [2024-04-29 23:15:03,415] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | session.timeout.ms = 45000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.141015402Z level=info msg="Executing migration" id="add index user_role.user_id"
23:16:59 kafka | [2024-04-29 23:15:03,422] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.142280759Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.269557ms
23:16:59 kafka | [2024-04-29 23:15:03,422] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:59 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.145855126Z level=info msg="Executing migration" id="create builtin role table"
23:16:59 kafka | [2024-04-29 23:15:03,422] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | ssl.cipher.suites = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.14685680Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.001434ms
23:16:59 kafka | [2024-04-29 23:15:03,422] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
23:16:59 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.150564299Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:16:59 kafka | [2024-04-29 23:15:03,422] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.151637613Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.073204ms
23:16:59 kafka | [2024-04-29 23:15:03,436] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | ssl.engine.factory.class = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.156018822Z level=info msg="Executing migration" id="add index builtin_role.name"
23:16:59 kafka | [2024-04-29 23:15:03,436] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | ssl.key.password = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.157602183Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.582001ms
23:16:59 kafka | [2024-04-29 23:15:03,436] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
23:16:59 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.161382864Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
23:16:59 kafka | [2024-04-29 23:15:03,436] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | ssl.keystore.certificate.chain = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.170973912Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.586078ms
23:16:59 kafka | [2024-04-29 23:15:03,437] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
23:16:59 policy-pap | ssl.keystore.key = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.175917588Z level=info msg="Executing migration" id="add index builtin_role.org_id"
23:16:59 kafka | [2024-04-29 23:15:03,445] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | ssl.keystore.location = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.17681220Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=897.101µs
23:16:59 kafka | [2024-04-29 23:15:03,445] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | ssl.keystore.password = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.183656311Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
23:16:59 kafka | [2024-04-29 23:15:03,445] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | ssl.keystore.type = JKS
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.185399144Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.743763ms
23:16:59 kafka | [2024-04-29 23:15:03,445] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
23:16:59 policy-pap | ssl.protocol = TLSv1.3
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.189283417Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
23:16:59 kafka | [2024-04-29 23:15:03,445] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | ssl.provider = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.190973499Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.691682ms
23:16:59 kafka | [2024-04-29 23:15:03,452] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
23:16:59 policy-pap | ssl.secure.random.implementation = null
23:16:59 kafka | [2024-04-29 23:15:03,453] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.194726639Z level=info msg="Executing migration" id="add unique index role.uid"
23:16:59 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:59 kafka | [2024-04-29 23:15:03,453] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.195819563Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.094144ms
23:16:59 policy-pap | ssl.truststore.certificates = null
23:16:59 kafka | [2024-04-29 23:15:03,453] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.201418998Z level=info msg="Executing migration" id="create seed assignment table"
23:16:59 policy-pap |
ssl.truststore.location = null 23:16:59 kafka | [2024-04-29 23:15:03,453] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.202723996Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.304538ms 23:16:59 policy-pap | ssl.truststore.password = null 23:16:59 kafka | [2024-04-29 23:15:03,459] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.206550747Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:59 policy-pap | ssl.truststore.type = JKS 23:16:59 kafka | [2024-04-29 23:15:03,460] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.208299971Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.748754ms 23:16:59 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:59 kafka | [2024-04-29 23:15:03,460] INFO [Partition 
__consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.212244323Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:59 policy-pap | 23:16:59 kafka | [2024-04-29 23:15:03,460] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.220338981Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.091278ms 23:16:59 policy-pap | [2024-04-29T23:15:02.368+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:59 kafka | [2024-04-29 23:15:03,460] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.228090125Z level=info msg="Executing migration" id="permission kind migration" 23:16:59 policy-pap | [2024-04-29T23:15:02.368+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:59 kafka | [2024-04-29 23:15:03,466] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.243583721Z level=info msg="Migration successfully executed" id="permission kind migration" duration=15.528667ms 23:16:59 policy-pap | [2024-04-29T23:15:02.368+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432502368 23:16:59 kafka | [2024-04-29 23:15:03,467] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.249163486Z level=info msg="Executing migration" id="permission attribute migration" 23:16:59 policy-pap | [2024-04-29T23:15:02.369+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:59 kafka | [2024-04-29 23:15:03,467] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:30.260453586Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=11.2942ms 23:16:59 policy-pap | [2024-04-29T23:15:02.369+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:59 kafka | [2024-04-29 23:15:03,467] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.264085715Z level=info msg="Executing migration" id="permission identifier migration" 23:16:59 policy-pap | [2024-04-29T23:15:02.369+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=61090288-88b1-492f-a004-3449c9445940, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:59 kafka | [2024-04-29 23:15:03,467] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.270056515Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.971541ms 23:16:59 policy-pap | [2024-04-29T23:15:02.369+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=138f9fa3-ce1b-405c-9d22-e6763c020d7f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:59 kafka | [2024-04-29 23:15:03,474] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.27346542Z level=info msg="Executing migration" id="add permission identifier index" 23:16:59 policy-pap | [2024-04-29T23:15:02.369+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=289b515d-4371-41eb-ad35-f517759e0af5, alive=false, publisher=null]]: starting 23:16:59 kafka | [2024-04-29 23:15:03,475] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:59 grafana | logger=migrator 
t=2024-04-29T23:14:30.274688666Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.220676ms 23:16:59 policy-pap | [2024-04-29T23:15:02.385+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:59 kafka | [2024-04-29 23:15:03,475] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.279088945Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:16:59 policy-pap | acks = -1 23:16:59 kafka | [2024-04-29 23:15:03,475] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.280816658Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.726033ms 23:16:59 policy-pap | auto.include.jmx.reporter = true 23:16:59 kafka | [2024-04-29 23:15:03,475] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.285800155Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:16:59 policy-pap | batch.size = 16384 23:16:59 kafka | [2024-04-29 23:15:03,484] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.287418687Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.619012ms 23:16:59 policy-pap | bootstrap.servers = [kafka:9092] 23:16:59 kafka | [2024-04-29 23:15:03,485] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 kafka | [2024-04-29 23:15:03,485] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | buffer.memory = 33554432 23:16:59 kafka | [2024-04-29 23:15:03,485] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.293319775Z level=info msg="Executing migration" id="create query_history table v1" 23:16:59 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:59 kafka | [2024-04-29 23:15:03,485] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and 
removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.294736184Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.416559ms 23:16:59 policy-pap | client.id = producer-1 23:16:59 kafka | [2024-04-29 23:15:03,493] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.299278265Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:59 policy-pap | compression.type = none 23:16:59 kafka | [2024-04-29 23:15:03,493] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.301545335Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.26878ms 23:16:59 policy-pap | connections.max.idle.ms = 540000 23:16:59 kafka | [2024-04-29 23:15:03,494] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.306162266Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:59 policy-pap | 
delivery.timeout.ms = 120000 23:16:59 kafka | [2024-04-29 23:15:03,494] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.30639955Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=238.084µs 23:16:59 policy-pap | enable.idempotence = true 23:16:59 kafka | [2024-04-29 23:15:03,494] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.311042622Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:59 policy-pap | interceptor.classes = [] 23:16:59 kafka | [2024-04-29 23:15:03,501] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.311162723Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=120.981µs 23:16:59 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:59 kafka | [2024-04-29 23:15:03,502] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT 
FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:59 policy-pap | linger.ms = 0 23:16:59 kafka | [2024-04-29 23:15:03,502] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.314446197Z level=info msg="Executing migration" id="teams permissions migration" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | max.block.ms = 60000 23:16:59 kafka | [2024-04-29 23:15:03,502] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.315074986Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=628.799µs 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,502] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.319617946Z level=info msg="Executing migration" id="dashboard permissions" 23:16:59 policy-pap | max.in.flight.requests.per.connection = 5 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,510] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.320373647Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=759.001µs 23:16:59 policy-pap | max.request.size = 1048576 23:16:59 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:59 kafka | [2024-04-29 23:15:03,510] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.323422888Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:59 policy-pap | metadata.max.age.ms = 300000 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,510] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.324194728Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=771.911µs 23:16:59 policy-pap | metadata.max.idle.ms = 300000 23:16:59 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:59 kafka | [2024-04-29 23:15:03,510] INFO [Partition __consumer_offsets-38 
broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.328938171Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:59 policy-pap | metric.reporters = [] 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,511] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.329287476Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=348.855µs 23:16:59 policy-pap | metrics.num.samples = 2 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,516] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.333217238Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:59 policy-pap | metrics.recording.level = INFO 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,517] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.333837126Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=619.988µs 23:16:59 policy-pap | metrics.sample.window.ms = 30000 23:16:59 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:59 kafka | [2024-04-29 23:15:03,517] 
INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.338173804Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:59 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.340133931Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.958687ms 23:16:59 policy-pap | partitioner.availability.timeout.ms = 0 23:16:59 kafka | [2024-04-29 23:15:03,517] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:59 policy-pap | partitioner.class = null 23:16:59 kafka | [2024-04-29 23:15:03,517] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.344513279Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.346616257Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.106999ms 23:16:59 kafka | [2024-04-29 23:15:03,524] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | partitioner.ignore.keys = false 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.351435152Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:59 kafka | [2024-04-29 23:15:03,524] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | receive.buffer.bytes = 32768 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.357537403Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.101112ms 23:16:59 kafka | [2024-04-29 23:15:03,524] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | reconnect.backoff.max.ms = 1000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.370982532Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:59 kafka | [2024-04-29 23:15:03,524] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:59 policy-pap | 
reconnect.backoff.ms = 50 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.371338787Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=357.225µs 23:16:59 kafka | [2024-04-29 23:15:03,525] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(IWsOBm1GS4OdGGC-w1lwlg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | request.timeout.ms = 30000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.378627374Z level=info msg="Executing migration" id="create correlation table v1" 23:16:59 kafka | [2024-04-29 23:15:03,532] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:59 policy-pap | retries = 2147483647 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.379715629Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.088195ms 23:16:59 kafka | [2024-04-29 23:15:03,533] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | retry.backoff.ms = 100 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.385425985Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:59 kafka | [2024-04-29 
23:15:03,533] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.client.callback.handler.class = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.38656071Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.134785ms
23:16:59 kafka | [2024-04-29 23:15:03,533] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.jaas.config = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.390235049Z level=info msg="Executing migration" id="add index correlations.source_uid"
23:16:59 kafka | [2024-04-29 23:15:03,533] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:59 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.391347114Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.111885ms
23:16:59 kafka | [2024-04-29 23:15:03,540] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.394659398Z level=info msg="Executing migration" id="add correlation config column"
23:16:59 kafka | [2024-04-29 23:15:03,541] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:59 policy-pap | sasl.kerberos.service.name = null
23:16:59 kafka | [2024-04-29 23:15:03,541] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.403121421Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.461433ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:59 kafka | [2024-04-29 23:15:03,541] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.407445239Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:59 kafka | [2024-04-29 23:15:03,541] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.408579364Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.134355ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.callback.handler.class = null
23:16:59 kafka | [2024-04-29 23:15:03,548] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.413780544Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
23:16:59 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:59 policy-pap | sasl.login.class = null
23:16:59 kafka | [2024-04-29 23:15:03,548] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.417140788Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=3.365505ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.connect.timeout.ms = null
23:16:59 kafka | [2024-04-29 23:15:03,548] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.421509657Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
23:16:59 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:59 policy-pap | sasl.login.read.timeout.ms = null
23:16:59 kafka | [2024-04-29 23:15:03,549] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.445109283Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.584705ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:59 kafka | [2024-04-29 23:15:03,549] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.449829036Z level=info msg="Executing migration" id="create correlation v2"
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:59 kafka | [2024-04-29 23:15:03,557] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.451031992Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.202466ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:59 kafka | [2024-04-29 23:15:03,558] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.454771172Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
23:16:59 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
23:16:59 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:59 kafka | [2024-04-29 23:15:03,558] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.455939767Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.168535ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:59 kafka | [2024-04-29 23:15:03,558] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.461580103Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:59 kafka | [2024-04-29 23:15:03,559] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.462777339Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.198406ms
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:59 kafka | [2024-04-29 23:15:03,565] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | sasl.mechanism = GSSAPI
23:16:59 kafka | [2024-04-29 23:15:03,566] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.468667687Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
23:16:59 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:16:59 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:59 kafka | [2024-04-29 23:15:03,566] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.471311623Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.647426ms
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:59 kafka | [2024-04-29 23:15:03,566] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.47556071Z level=info msg="Executing migration" id="copy correlation v1 to v2"
23:16:59 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:59 kafka | [2024-04-29 23:15:03,566] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.475841253Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=281.033µs
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:59 kafka | [2024-04-29 23:15:03,576] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.479114087Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:59 kafka | [2024-04-29 23:15:03,577] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.479866107Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=752.24µs
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:59 kafka | [2024-04-29 23:15:03,577] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.484326517Z level=info msg="Executing migration" id="add provisioning column"
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:59 kafka | [2024-04-29 23:15:03,578] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | > upgrade 0100-pdp.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.492439456Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.113189ms
23:16:59 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:59 kafka | [2024-04-29 23:15:03,578] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.495882411Z level=info msg="Executing migration" id="create entity_events table"
23:16:59 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:59 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
23:16:59 kafka | [2024-04-29 23:15:03,586] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.496503569Z level=info msg="Migration successfully executed" id="create entity_events table" duration=620.988µs
23:16:59 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,586] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.50320766Z level=info msg="Executing migration" id="create dashboard public config v1"
23:16:59 policy-pap | security.protocol = PLAINTEXT
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,587] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.505829015Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.642965ms
23:16:59 policy-pap | security.providers = null
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,587] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.509775007Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:59 policy-pap | send.buffer.bytes = 131072
23:16:59 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
23:16:59 kafka | [2024-04-29 23:15:03,587] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.510094691Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:59 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,593] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.5129982Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:59 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:59 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
23:16:59 kafka | [2024-04-29 23:15:03,594] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.513311884Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:59 policy-pap | ssl.cipher.suites = null
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,594] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.518355042Z level=info msg="Executing migration" id="Drop old dashboard public config table"
23:16:59 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,594] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.519131862Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=777.21µs
23:16:59 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,594] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.522735231Z level=info msg="Executing migration" id="recreate dashboard public config v1"
23:16:59 policy-pap | ssl.engine.factory.class = null
23:16:59 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
23:16:59 kafka | [2024-04-29 23:15:03,600] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.523749674Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.014513ms
23:16:59 policy-pap | ssl.key.password = null
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,601] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.5286153Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
23:16:59 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:59 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
23:16:59 kafka | [2024-04-29 23:15:03,601] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.530580436Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.969147ms
23:16:59 policy-pap | ssl.keystore.certificate.chain = null
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,601] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.534292545Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:59 policy-pap | ssl.keystore.key = null
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,601] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.53542112Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.128825ms
23:16:59 policy-pap | ssl.keystore.location = null
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,607] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.539800839Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
23:16:59 policy-pap | ssl.keystore.password = null
23:16:59 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
23:16:59 kafka | [2024-04-29 23:15:03,608] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.540836433Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.035274ms
23:16:59 policy-pap | ssl.keystore.type = JKS
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,608] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.544951588Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
23:16:59 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
23:16:59 kafka | [2024-04-29 23:15:03,608] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-pap | ssl.protocol = TLSv1.3
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.545925531Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=974.003µs
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,608] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-pap | ssl.provider = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.550733545Z level=info msg="Executing migration" id="Drop public config table"
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,616] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-pap | ssl.secure.random.implementation = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.551480745Z level=info msg="Migration successfully executed" id="Drop public config table" duration=750.87µs
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,616] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.556123137Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
23:16:59 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
23:16:59 kafka | [2024-04-29 23:15:03,616] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
23:16:59 policy-pap | ssl.truststore.certificates = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.556980189Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=856.822µs
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,617] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-pap | ssl.truststore.location = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.562894538Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
23:16:59 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
23:16:59 kafka | [2024-04-29 23:15:03,617] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-pap | ssl.truststore.password = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.563949112Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.054894ms
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,624] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-pap | ssl.truststore.type = JKS
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.567699342Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,625] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-pap | transaction.timeout.ms = 60000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.568868478Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.168996ms
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,625] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
23:16:59 policy-pap | transactional.id = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.576317197Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
23:16:59 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
23:16:59 kafka | [2024-04-29 23:15:03,625] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,625] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.577556303Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.240186ms
23:16:59 policy-pap |
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,632] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.581330555Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
23:16:59 policy-pap | [2024-04-29T23:15:02.396+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,633] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.604870009Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.538244ms
23:16:59 policy-pap | [2024-04-29T23:15:02.411+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:59 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
23:16:59 kafka | [2024-04-29 23:15:03,633] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.608272605Z level=info msg="Executing migration" id="add annotations_enabled column"
23:16:59 policy-pap | [2024-04-29T23:15:02.411+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,633] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.614367666Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.095632ms
23:16:59 policy-pap | [2024-04-29T23:15:02.411+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432502411
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.618730634Z level=info msg="Executing migration" id="add time_selection_enabled column"
23:16:59 kafka | [2024-04-29 23:15:03,633] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
23:16:59 policy-pap | [2024-04-29T23:15:02.411+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=289b515d-4371-41eb-ad35-f517759e0af5, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.627057436Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.325862ms
23:16:59 kafka | [2024-04-29 23:15:03,640] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | [2024-04-29T23:15:02.411+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=56498fa9-7391-4309-9ab9-b0e25a5eb014, alive=false, publisher=null]]: starting
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.63116526Z level=info msg="Executing migration" id="delete orphaned public dashboards"
23:16:59 kafka | [2024-04-29 23:15:03,641] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | [2024-04-29T23:15:02.412+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.631386414Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=221.374µs
23:16:59 kafka | [2024-04-29 23:15:03,641] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | acks = -1
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.63480936Z level=info msg="Executing migration" id="add share column"
23:16:59 kafka | [2024-04-29 23:15:03,641] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
23:16:59 policy-pap | auto.include.jmx.reporter = true
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.642971169Z level=info msg="Migration successfully executed" id="add share column" duration=8.161289ms
23:16:59 kafka | [2024-04-29 23:15:03,641] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | batch.size = 16384
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.648303751Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
23:16:59 kafka | [2024-04-29 23:15:03,651] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:59 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
23:16:59 policy-pap | bootstrap.servers = [kafka:9092]
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.648491573Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=188.642µs
23:16:59 kafka | [2024-04-29 23:15:03,652] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | buffer.memory = 33554432
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.651210909Z level=info msg="Executing migration" id="create file table"
23:16:59 kafka | [2024-04-29 23:15:03,652] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.652272583Z level=info msg="Migration successfully executed" id="create file table" duration=1.058554ms
23:16:59 kafka | [2024-04-29 23:15:03,652] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
23:16:59 policy-db-migrator |
23:16:59 policy-pap | client.id = producer-2 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.655642608Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:59 kafka | [2024-04-29 23:15:03,652] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:59 policy-pap | compression.type = none 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.657452633Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.810124ms 23:16:59 kafka | [2024-04-29 23:15:03,661] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | connections.max.idle.ms = 540000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.661997963Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:59 kafka | [2024-04-29 23:15:03,661] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:59 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:59 policy-pap | delivery.timeout.ms = 120000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.663346062Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.347849ms 23:16:59 kafka | [2024-04-29 23:15:03,661] INFO [Partition __consumer_offsets-28 broker=1] No 
checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | JOIN pdpstatistics b 23:16:59 policy-pap | enable.idempotence = true 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.66765924Z level=info msg="Executing migration" id="create file_meta table" 23:16:59 kafka | [2024-04-29 23:15:03,661] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:59 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:59 policy-pap | interceptor.classes = [] 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.668613842Z level=info msg="Migration successfully executed" id="create file_meta table" duration=954.432µs 23:16:59 kafka | [2024-04-29 23:15:03,662] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(_u5Y4Qn_TSSHRzz95FvL9Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:59 policy-db-migrator | SET a.id = b.id 23:16:59 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.672678456Z level=info msg="Executing migration" id="file table idx: path key" 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | linger.ms = 0 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.673441947Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=764.221µs 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | max.block.ms = 60000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.677056305Z level=info msg="Executing migration" id="set path collation in file table" 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | max.in.flight.requests.per.connection = 5 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.677103455Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=47.58µs 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 
0180-jpapdpstatistics_enginestats.sql 23:16:59 policy-pap | max.request.size = 1048576 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.681160249Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | metadata.max.age.ms = 300000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.68122945Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=67.731µs 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:59 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:59 policy-pap | metadata.max.idle.ms = 300000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.684823199Z level=info msg="Executing migration" id="managed permissions migration" 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | metric.reporters = [] 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.685518758Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=695.309µs 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:59 
policy-db-migrator | 23:16:59 policy-pap | metrics.num.samples = 2 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.688780872Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | metrics.recording.level = INFO 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.688946734Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=166.512µs 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:59 policy-pap | metrics.sample.window.ms = 30000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.691557149Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.692515371Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=957.812µs 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:59 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:59 policy-pap | partitioner.availability.timeout.ms = 0 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.696379993Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:59 kafka | [2024-04-29 23:15:03,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | partitioner.class = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.705446125Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.065972ms 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | partitioner.ignore.keys = false 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.709747792Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | receive.buffer.bytes = 32768 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.709973585Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=224.253µs 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:59 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:59 policy-pap | reconnect.backoff.max.ms = 1000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.713902967Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | reconnect.backoff.ms = 50 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.715152874Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.249227ms 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:59 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.719920408Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:59 policy-pap | request.timeout.ms = 30000 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.720374954Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=455.276µs 23:16:59 policy-pap | retries = 2147483647 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] 
Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:59 policy-db-migrator | 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.723712809Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:59 policy-pap | retry.backoff.ms = 100 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.723918822Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=205.913µs 23:16:59 policy-pap | sasl.client.callback.handler.class = null 23:16:59 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.726847111Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:59 policy-pap | sasl.jaas.config = null 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.727645651Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=798.17µs 23:16:59 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME 
VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.731808897Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:59 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.74101152Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.208064ms 23:16:59 policy-pap | sasl.kerberos.service.name = null 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.745126125Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:59 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.752497274Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.370209ms 23:16:59 policy-pap | 
sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:59 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.755610155Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:59 policy-pap | sasl.login.callback.handler.class = null 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.75677265Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.162275ms 23:16:59 policy-pap | sasl.login.class = null 23:16:59 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:59 policy-pap | sasl.login.connect.timeout.ms = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.759666799Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:59 policy-pap | 
sasl.login.read.timeout.ms = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.832852878Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.183479ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.837888405Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.838737257Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=848.392µs 23:16:59 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:59 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.841615196Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:59 policy-pap 
| sasl.login.refresh.window.jitter = 0.05 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.842468267Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=851.021µs 23:16:59 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:59 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.845562259Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:59 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.869428338Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=23.86557ms 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:59 policy-pap | sasl.mechanism = GSSAPI 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.873811856Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.882746236Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.9339ms 23:16:59 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.887001353Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.887298637Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=297.584µs 23:16:59 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:59 grafana | 
logger=migrator t=2024-04-29T23:14:30.891814037Z level=info msg="Executing migration" id="prevent seeding OnCall access"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.892114431Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=300.404µs
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.895570097Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:59 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.895822411Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=252.643µs
23:16:59 policy-db-migrator | > upgrade 0120-toscatrigger.sql
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:59 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.899079074Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:59 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.899283727Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=204.503µs
23:16:59 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:59 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.902831174Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:59 policy-pap | security.protocol = PLAINTEXT
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.903042158Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=210.674µs
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:59 policy-pap | security.providers = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.908709163Z level=info msg="Executing migration" id="create folder table"
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:59 policy-pap | send.buffer.bytes = 131072
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.910186113Z level=info msg="Migration successfully executed" id="create folder table" duration=1.47604ms
23:16:59 kafka | [2024-04-29 23:15:03,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:59 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
23:16:59 kafka | [2024-04-29 23:15:03,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.914177957Z level=info msg="Executing migration" id="Add index for parent_uid"
23:16:59 policy-pap | ssl.cipher.suites = null
23:16:59 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.915451973Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.274147ms
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.920200327Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.922228244Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.024356ms
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.engine.factory.class = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.926077996Z level=info msg="Executing migration" id="Update folder title length"
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.key.password = null
23:16:59 policy-db-migrator | > upgrade 0140-toscaparameter.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.926115796Z level=info msg="Migration successfully executed" id="Update folder title length" duration=39.24µs
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.929364609Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.keystore.certificate.chain = null
23:16:59 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.930611796Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.244166ms
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.keystore.key = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.933799809Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.keystore.location = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.935156857Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.356858ms
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.keystore.password = null
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.938976537Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.keystore.type = JKS
23:16:59 policy-db-migrator | > upgrade 0150-toscaproperty.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.940357506Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.380969ms
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.protocol = TLSv1.3
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.provider = null
23:16:59 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.94364573Z level=info msg="Executing migration" id="Sync dashboard and folder table"
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.secure.random.implementation = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.944202808Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=556.238µs
23:16:59 kafka | [2024-04-29 23:15:03,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.946814933Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.truststore.certificates = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.947124097Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=308.504µs
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.truststore.location = null
23:16:59 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.950980219Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | ssl.truststore.password = null
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.952197624Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.217545ms
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | ssl.truststore.type = JKS
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.955716731Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | transaction.timeout.ms = 60000
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.956908018Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.190947ms
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | transactional.id = null
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.959905008Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
23:16:59 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.961001642Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.096634ms
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.966005969Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | [2024-04-29T23:15:02.413+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | [2024-04-29T23:15:02.416+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.967266016Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.260247ms
23:16:59 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | [2024-04-29T23:15:02.416+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.97054792Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.971716846Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.167546ms
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | [2024-04-29T23:15:02.416+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714432502416
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.975979853Z level=info msg="Executing migration" id="create anon_device table"
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | [2024-04-29T23:15:02.416+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=56498fa9-7391-4309-9ab9-b0e25a5eb014, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.977146408Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.165925ms
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
23:16:59 policy-pap | [2024-04-29T23:15:02.416+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.98028678Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
23:16:59 policy-pap | [2024-04-29T23:15:02.416+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.981519517Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.232717ms
23:16:59 policy-pap | [2024-04-29T23:15:02.421+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
23:16:59 kafka | [2024-04-29 23:15:03,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.985309698Z level=info msg="Executing migration" id="add index anon_device.updated_at"
23:16:59 policy-pap | [2024-04-29T23:15:02.422+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.986507623Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.195185ms
23:16:59 policy-pap | [2024-04-29T23:15:02.425+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.990892402Z level=info msg="Executing migration" id="create signing_key table"
23:16:59 policy-pap | [2024-04-29T23:15:02.426+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.991989466Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.096564ms
23:16:59 policy-pap | [2024-04-29T23:15:02.426+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.995540865Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
23:16:59 policy-pap | [2024-04-29T23:15:02.427+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.996842662Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.301636ms
23:16:59 policy-pap | [2024-04-29T23:15:02.426+00:00|INFO|TimerManager|Thread-9] timer manager update started
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:30.999886042Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
23:16:59 policy-pap | [2024-04-29T23:15:02.427+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.001251391Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.368739ms
23:16:59 policy-pap | [2024-04-29T23:15:02.428+00:00|INFO|ServiceManager|main] Policy PAP started
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.005349946Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
23:16:59 policy-pap | [2024-04-29T23:15:02.429+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.272 seconds (process running for 9.841)
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.00568702Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=337.384µs
23:16:59 policy-pap | [2024-04-29T23:15:02.786+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.009014899Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
23:16:59 policy-pap | [2024-04-29T23:15:02.787+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Cluster ID: 1q8HESR3R-yEc2qak37gtw
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.021248602Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.232853ms
23:16:59 policy-pap | [2024-04-29T23:15:02.792+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 1q8HESR3R-yEc2qak37gtw
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.024630621Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
23:16:59 policy-pap | [2024-04-29T23:15:02.794+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 1q8HESR3R-yEc2qak37gtw
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.025345319Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=716.318µs
23:16:59 policy-pap | [2024-04-29T23:15:02.844+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator |
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.029100553Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
23:16:59 policy-pap | [2024-04-29T23:15:02.844+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 1q8HESR3R-yEc2qak37gtw
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.030298427Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.197394ms
23:16:59 policy-pap | [2024-04-29T23:15:02.901+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.033753256Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
23:16:59 policy-pap | [2024-04-29T23:15:02.917+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.035393156Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.64003ms
23:16:59 policy-pap | [2024-04-29T23:15:02.920+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.039084578Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
23:16:59 policy-pap | [2024-04-29T23:15:02.963+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.040718787Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.628459ms
23:16:59 policy-pap | [2024-04-29T23:15:03.034+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.044862275Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
23:16:59 policy-pap | [2024-04-29T23:15:03.071+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | > upgrade 0100-upgrade.sql
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.045983608Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.120853ms
23:16:59 policy-pap | [2024-04-29T23:15:03.142+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.04873224Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
23:16:59 policy-pap | [2024-04-29T23:15:03.180+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | select 'upgrade to 1100 completed' as msg
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.049813342Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.080842ms
23:16:59 policy-pap | [2024-04-29T23:15:03.247+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.055604099Z level=info msg="Executing migration" id="create sso_setting table"
23:16:59 policy-pap | [2024-04-29T23:15:03.292+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.056667931Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.064262ms
23:16:59 policy-pap | [2024-04-29T23:15:03.361+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | msg
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.059858139Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
23:16:59 policy-pap | [2024-04-29T23:15:03.401+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | upgrade to 1100 completed
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.061054032Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.194573ms
23:16:59 policy-pap | [2024-04-29T23:15:03.466+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.064473362Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
23:16:59 policy-pap | [2024-04-29T23:15:03.505+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.064909327Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=436.254µs
23:16:59 policy-pap | [2024-04-29T23:15:03.571+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,687] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.068192584Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
23:16:59 policy-pap | [2024-04-29T23:15:03.615+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.068256206Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=61.672µs
23:16:59 policy-pap | [2024-04-29T23:15:03.675+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.07384629Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
23:16:59 policy-pap | [2024-04-29T23:15:03.725+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.083027887Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.183097ms
23:16:59 policy-pap | [2024-04-29T23:15:03.731+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:59 policy-db-migrator |
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.086247094Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
23:16:59 policy-pap |
[2024-04-29T23:15:03.757+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-50560068-6271-4dcd-9a1a-dfae16161e40 23:16:59 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.09282194Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.573696ms 23:16:59 policy-pap | [2024-04-29T23:15:03.757+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.096114258Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:16:59 policy-pap | [2024-04-29T23:15:03.757+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:59 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.096429142Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=315.054µs 23:16:59 policy-pap | [2024-04-29T23:15:03.782+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=migrator t=2024-04-29T23:14:31.1005988Z level=info msg="migrations completed" performed=548 skipped=0 duration=3.805988238s 23:16:59 policy-pap | [2024-04-29T23:15:03.784+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] (Re-)joining group 23:16:59 
policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=sqlstore t=2024-04-29T23:14:31.110443244Z level=info msg="Created default admin" user=admin 23:16:59 policy-pap | [2024-04-29T23:15:03.790+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Request joining group due to: need to re-join with the given member-id: consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3-ed44512a-2f1c-4fa2-bc63-34a09c71ab3b 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=sqlstore t=2024-04-29T23:14:31.110731507Z level=info msg="Created default organization" 23:16:59 policy-pap | [2024-04-29T23:15:03.790+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:59 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=secrets t=2024-04-29T23:14:31.114492271Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:59 policy-pap | [2024-04-29T23:15:03.790+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] (Re-)joining group 23:16:59 policy-db-migrator | -------------- 23:16:59 grafana | logger=plugin.store t=2024-04-29T23:14:31.133778794Z level=info msg="Loading plugins..." 23:16:59 policy-pap | [2024-04-29T23:15:06.781+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-50560068-6271-4dcd-9a1a-dfae16161e40', protocol='range'} 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=local.finder t=2024-04-29T23:14:31.175852071Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:59 policy-pap | [2024-04-29T23:15:06.790+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-50560068-6271-4dcd-9a1a-dfae16161e40=Assignment(partitions=[policy-pdp-pap-0])} 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=plugin.store t=2024-04-29T23:14:31.175879251Z level=info msg="Plugins loaded" count=55 duration=42.102147ms 23:16:59 policy-pap | [2024-04-29T23:15:06.795+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3-ed44512a-2f1c-4fa2-bc63-34a09c71ab3b', protocol='range'} 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=query_data t=2024-04-29T23:14:31.187329214Z level=info msg="Query Service initialization" 23:16:59 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:59 policy-pap | [2024-04-29T23:15:06.796+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Finished assignment for group at generation 1: {consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3-ed44512a-2f1c-4fa2-bc63-34a09c71ab3b=Assignment(partitions=[policy-pdp-pap-0])} 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=live.push_http t=2024-04-29T23:14:31.192862378Z level=info msg="Live Push Gateway initialization" 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:06.822+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-50560068-6271-4dcd-9a1a-dfae16161e40', protocol='range'} 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=ngalert.migration t=2024-04-29T23:14:31.204674544Z level=info msg=Starting 23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:59 policy-pap | [2024-04-29T23:15:06.823+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=ngalert.migration t=2024-04-29T23:14:31.205306682Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:06.823+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3-ed44512a-2f1c-4fa2-bc63-34a09c71ab3b', protocol='range'} 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=ngalert.migration orgID=1 t=2024-04-29T23:14:31.20599147Z 
level=info msg="Migrating alerts for organisation" 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:06.824+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:59 grafana | logger=ngalert.migration orgID=1 t=2024-04-29T23:14:31.207032272Z level=info msg="Alerts found to migrate" alerts=0 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=ngalert.migration t=2024-04-29T23:14:31.212634917Z level=info msg="Completed alerting migration" 23:16:59 policy-pap | [2024-04-29T23:15:06.826+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:59 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=ngalert.state.manager t=2024-04-29T23:14:31.250751498Z level=info msg="Running in alternative execution of Error/NoData mode" 23:16:59 policy-pap | [2024-04-29T23:15:06.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Adding newly assigned partitions: policy-pdp-pap-0 23:16:59 policy-db-migrator | -------------- 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=infra.usagestats.collector t=2024-04-29T23:14:31.254880016Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:59 policy-pap | [2024-04-29T23:15:06.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Found no committed offset for partition policy-pdp-pap-0 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:59 grafana | logger=provisioning.datasources t=2024-04-29T23:14:31.259703462Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:59 policy-pap | [2024-04-29T23:15:06.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:59 policy-db-migrator | 23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:59 grafana | logger=provisioning.alerting t=2024-04-29T23:14:31.279075296Z level=info msg="starting to provision alerting" 23:16:59 policy-pap | [2024-04-29T23:15:06.864+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3, groupId=138f9fa3-ce1b-405c-9d22-e6763c020d7f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:16:59 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=provisioning.alerting t=2024-04-29T23:14:31.279115726Z level=info msg="finished to provision alerting"
23:16:59 policy-pap | [2024-04-29T23:15:06.865+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=ngalert.state.manager t=2024-04-29T23:14:31.279712523Z level=info msg="Warming state cache for startup"
23:16:59 policy-pap | [2024-04-29T23:15:08.731+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:59 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 grafana | logger=grafanaStorageLogger t=2024-04-29T23:14:31.280615983Z level=info msg="Storage starting"
23:16:59 policy-pap | [2024-04-29T23:15:08.731+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 grafana | logger=ngalert.state.manager t=2024-04-29T23:14:31.280664384Z level=info msg="State cache has been initialized" states=0 duration=923.36µs
23:16:59 policy-pap | [2024-04-29T23:15:08.734+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms
23:16:59 policy-db-migrator | 
23:16:59 policy-db-migrator | --------------
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | [2024-04-29T23:15:23.918+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
23:16:59 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-29T23:14:31.280742064Z level=info msg="Starting MultiOrg Alertmanager"
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-pap | []
23:16:59 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:59 grafana | logger=ngalert.scheduler t=2024-04-29T23:14:31.280853936Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-pap | [2024-04-29T23:15:23.919+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=ticker t=2024-04-29T23:14:31.280977167Z level=info msg=starting first_tick=2024-04-29T23:14:40Z
23:16:59 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"11f23ac5-5fc1-4033-99a8-bb731eb89470","timestampMs":1714432523879,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"}
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=http.server t=2024-04-29T23:14:31.281489034Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
23:16:59 policy-pap | [2024-04-29T23:15:23.927+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=provisioning.dashboard t=2024-04-29T23:14:31.319945448Z level=info msg="starting to provision dashboards"
23:16:59 policy-pap | [2024-04-29T23:15:23.933+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | TRUNCATE TABLE sequence
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.365891051Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:59 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"11f23ac5-5fc1-4033-99a8-bb731eb89470","timestampMs":1714432523879,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"}
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=plugins.update.checker t=2024-04-29T23:14:31.369918657Z level=info msg="Update check succeeded" duration=89.675678ms
23:16:59 policy-pap | [2024-04-29T23:15:24.008+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.386130035Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
23:16:59 policy-pap | [2024-04-29T23:15:24.008+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting listener
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.397971261Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
23:16:59 policy-pap | [2024-04-29T23:15:24.008+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting timer
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
23:16:59 grafana | logger=grafana.update.checker t=2024-04-29T23:14:31.402518704Z level=info msg="Update check succeeded" duration=122.786731ms
23:16:59 policy-pap | [2024-04-29T23:15:24.009+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=7edbd1fb-1d5c-4d52-8493-26ac0c4382f1, expireMs=1714432554009]
23:16:59 kafka | [2024-04-29 23:15:03,688] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.408962409Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
23:16:59 policy-pap | [2024-04-29T23:15:24.010+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting enqueue
23:16:59 kafka | [2024-04-29 23:15:03,691] INFO [Broker id=1] Finished LeaderAndIsr request in 623ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
23:16:59 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.416471966Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:59 policy-pap | [2024-04-29T23:15:24.011+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate started
23:16:59 kafka | [2024-04-29 23:15:03,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.429508316Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:59 policy-pap | [2024-04-29T23:15:24.011+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=7edbd1fb-1d5c-4d52-8493-26ac0c4382f1, expireMs=1714432554009]
23:16:59 kafka | [2024-04-29 23:15:03,694] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.463126056Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
23:16:59 policy-pap | [2024-04-29T23:15:24.013+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:59 kafka | [2024-04-29 23:15:03,694] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.463579062Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","timestampMs":1714432523990,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:59 kafka | [2024-04-29 23:15:03,694] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | DROP TABLE pdpstatistics
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.480449797Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
23:16:59 policy-pap | [2024-04-29T23:15:24.048+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:59 kafka | [2024-04-29 23:15:03,694] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=sqlstore.transactions t=2024-04-29T23:14:31.480486327Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","timestampMs":1714432523990,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=provisioning.dashboard t=2024-04-29T23:14:31.642740605Z level=info msg="finished to provision dashboards"
23:16:59 policy-pap | [2024-04-29T23:15:24.049+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | 
23:16:59 grafana | logger=grafana-apiserver t=2024-04-29T23:14:32.019414194Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
23:16:59 policy-pap | [2024-04-29T23:15:24.050+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:59 grafana | logger=grafana-apiserver t=2024-04-29T23:14:32.019844438Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","timestampMs":1714432523990,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 grafana | logger=infra.usagestats t=2024-04-29T23:16:25.292204864Z level=info msg="Usage stats are ready to report"
23:16:59 policy-pap | [2024-04-29T23:15:24.050+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
23:16:59 policy-pap | [2024-04-29T23:15:24.075+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:59 policy-db-migrator | --------------
23:16:59 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f2a8074f-0013-446c-a352-ca4fd5931c01","timestampMs":1714432524064,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"}
23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:24.081+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 kafka | [2024-04-29 23:15:03,695] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f2a8074f-0013-446c-a352-ca4fd5931c01","timestampMs":1714432524064,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup"} 23:16:59 kafka | [2024-04-29 23:15:03,696] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:59 policy-pap | [2024-04-29T23:15:24.082+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:59 kafka | [2024-04-29 23:15:03,696] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:24.087+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 kafka | [2024-04-29 23:15:03,696] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | DROP TABLE statistics_sequence 23:16:59 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b71c8243-6027-4010-b6e7-510fa7dd1d94","timestampMs":1714432524068,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 kafka | [2024-04-29 23:15:03,696] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | -------------- 23:16:59 policy-pap | [2024-04-29T23:15:24.094+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping 23:16:59 kafka | [2024-04-29 23:15:03,696] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | 23:16:59 policy-pap | [2024-04-29T23:15:24.094+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping enqueue 23:16:59 kafka | [2024-04-29 23:15:03,696] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:59 policy-pap | [2024-04-29T23:15:24.095+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping timer 23:16:59 kafka | [2024-04-29 23:15:03,696] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | name version 23:16:59 policy-pap | [2024-04-29T23:15:24.095+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7edbd1fb-1d5c-4d52-8493-26ac0c4382f1, expireMs=1714432554009] 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | policyadmin 1300 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:59 policy-pap | [2024-04-29T23:15:24.095+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping listener 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 policy-pap | [2024-04-29T23:15:24.095+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopped 23:16:59 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.099+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:59 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7edbd1fb-1d5c-4d52-8493-26ac0c4382f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"b71c8243-6027-4010-b6e7-510fa7dd1d94","timestampMs":1714432524068,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.099+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7edbd1fb-1d5c-4d52-8493-26ac0c4382f1 23:16:59 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.102+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate successful 23:16:59 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,697] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.102+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 start publishing next request 23:16:59 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.102+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange starting 23:16:59 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.102+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange starting listener 23:16:59 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.103+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange starting timer 23:16:59 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:34 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.103+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=7b662de9-702e-4d3b-a521-88c62a87dc66, expireMs=1714432554103] 23:16:59 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.103+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange starting enqueue 23:16:59 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.103+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=7b662de9-702e-4d3b-a521-88c62a87dc66, expireMs=1714432554103] 23:16:59 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.103+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange started 23:16:59 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,698] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.104+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:59 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"7b662de9-702e-4d3b-a521-88c62a87dc66","timestampMs":1714432523991,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.114+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:59 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"7b662de9-702e-4d3b-a521-88c62a87dc66","timestampMs":1714432523991,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.115+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:59 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.126+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:59 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"7b662de9-702e-4d3b-a521-88c62a87dc66","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"0ccb37a6-3abc-4fbc-a096-737e342c9174","timestampMs":1714432524115,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.127+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7b662de9-702e-4d3b-a521-88c62a87dc66 23:16:59 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.132+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,699] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"7b662de9-702e-4d3b-a521-88c62a87dc66","timestampMs":1714432523991,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,700] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.132+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:59 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 kafka | [2024-04-29 23:15:03,700] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-pap | [2024-04-29T23:15:24.134+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"7b662de9-702e-4d3b-a521-88c62a87dc66","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"0ccb37a6-3abc-4fbc-a096-737e342c9174","timestampMs":1714432524115,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 kafka | [2024-04-29 23:15:03,700] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=_u5Y4Qn_TSSHRzz95FvL9Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=IWsOBm1GS4OdGGC-w1lwlg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:59 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange stopping 23:16:59 kafka | [2024-04-29 23:15:03,700] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange stopping enqueue 23:16:59 kafka | [2024-04-29 23:15:03,701] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange stopping timer 23:16:59 kafka | [2024-04-29 23:15:03,701] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:59 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=7b662de9-702e-4d3b-a521-88c62a87dc66, expireMs=1714432554103] 23:16:59 kafka | [2024-04-29 23:15:03,706] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange stopping listener 23:16:59 kafka | [2024-04-29 23:15:03,706] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange stopped 23:16:59 kafka | [2024-04-29 23:15:03,706] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpStateChange successful 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 start publishing next request 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting listener 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:35 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting timer 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=09959dc9-506c-4a36-a595-66f19d2e88a7, 
expireMs=1714432554135] 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate starting enqueue 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:24.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate started 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.136+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:59 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2904242314340800u 1 
2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,707] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09959dc9-506c-4a36-a595-66f19d2e88a7","timestampMs":1714432524124,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.144+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:59 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 
policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09959dc9-506c-4a36-a595-66f19d2e88a7","timestampMs":1714432524124,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.145+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:59 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.146+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | {"source":"pap-84f9d567-fa59-4558-8d84-b060e7fa7b8f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09959dc9-506c-4a36-a595-66f19d2e88a7","timestampMs":1714432524124,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.147+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:59 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.154+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:59 policy-db-migrator | 49 
0580-toscadatatypes.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"09959dc9-506c-4a36-a595-66f19d2e88a7","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f20e76c5-f318-4326-8046-cf76b3b0b2d1","timestampMs":1714432524146,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.155+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 09959dc9-506c-4a36-a595-66f19d2e88a7 23:16:59 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.155+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:59 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"09959dc9-506c-4a36-a595-66f19d2e88a7","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f20e76c5-f318-4326-8046-cf76b3b0b2d1","timestampMs":1714432524146,"name":"apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:59 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,708] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.156+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping 23:16:59 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.156+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping enqueue 23:16:59 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.156+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping timer 23:16:59 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | 
[2024-04-29T23:15:24.157+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=09959dc9-506c-4a36-a595-66f19d2e88a7, expireMs=1714432554135] 23:16:59 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.157+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopping listener 23:16:59 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.157+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate stopped 23:16:59 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.161+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 PdpUpdate successful 23:16:59 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-pap | [2024-04-29T23:15:24.161+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6ae00bfa-667e-4fbd-8e7c-4816dbb9e744 has no more requests 23:16:59 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:29.164+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 
23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:29.225+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:29.241+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:29.243+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:59 kafka | [2024-04-29 23:15:03,709] 
TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:29.651+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:36 23:16:59 policy-pap | [2024-04-29T23:15:30.125+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 23:16:59 kafka | [2024-04-29 23:15:03,709] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent 
by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37 23:16:59 policy-pap | [2024-04-29T23:15:30.126+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37 23:16:59 policy-pap | [2024-04-29T23:15:30.634+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37 23:16:59 policy-pap | [2024-04-29T23:15:30.819+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37 23:16:59 policy-pap | [2024-04-29T23:15:30.904+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37 23:16:59 policy-pap | [2024-04-29T23:15:30.905+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup 23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:59 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37 23:16:59 policy-pap | [2024-04-29T23:15:30.905+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup 23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata 
request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:30.916+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-29T23:15:30Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-29T23:15:30Z, user=policyadmin)]
23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.575+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.576+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.576+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.576+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.576+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.586+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-29T23:15:31Z, user=policyadmin)]
23:16:59 kafka | [2024-04-29 23:15:03,710] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.899+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
23:16:59 kafka | [2024-04-29 23:15:03,711] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:59 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.899+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
23:16:59 kafka | [2024-04-29 23:15:03,712] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:59 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.899+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
23:16:59 kafka | [2024-04-29 23:15:03,750] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-50560068-6271-4dcd-9a1a-dfae16161e40 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.899+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
23:16:59 kafka | [2024-04-29 23:15:03,765] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-50560068-6271-4dcd-9a1a-dfae16161e40 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.899+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
23:16:59 kafka | [2024-04-29 23:15:03,788] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 138f9fa3-ce1b-405c-9d22-e6763c020d7f in Empty state. Created a new member id consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3-ed44512a-2f1c-4fa2-bc63-34a09c71ab3b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.899+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
23:16:59 kafka | [2024-04-29 23:15:03,792] INFO [GroupCoordinator 1]: Preparing to rebalance group 138f9fa3-ce1b-405c-9d22-e6763c020d7f in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3-ed44512a-2f1c-4fa2-bc63-34a09c71ab3b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:31.908+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-29T23:15:31Z, user=policyadmin)]
23:16:59 kafka | [2024-04-29 23:15:04,214] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 085fa03c-d2d9-404c-b0e2-72bc2e06aca2 in Empty state. Created a new member id consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2-0386bef5-4364-47bf-87f4-bb00f58168cb and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:52.473+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
23:16:59 kafka | [2024-04-29 23:15:04,217] INFO [GroupCoordinator 1]: Preparing to rebalance group 085fa03c-d2d9-404c-b0e2-72bc2e06aca2 in state PreparingRebalance with old generation 0 (__consumer_offsets-19) (reason: Adding new member consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2-0386bef5-4364-47bf-87f4-bb00f58168cb with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:52.474+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
23:16:59 kafka | [2024-04-29 23:15:06,778] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:37
23:16:59 policy-pap | [2024-04-29T23:15:54.010+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=7edbd1fb-1d5c-4d52-8493-26ac0c4382f1, expireMs=1714432554009]
23:16:59 kafka | [2024-04-29 23:15:06,793] INFO [GroupCoordinator 1]: Stabilized group 138f9fa3-ce1b-405c-9d22-e6763c020d7f generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:38
23:16:59 policy-pap | [2024-04-29T23:15:54.103+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=7b662de9-702e-4d3b-a521-88c62a87dc66, expireMs=1714432554103]
23:16:59 kafka | [2024-04-29 23:15:06,808] INFO [GroupCoordinator 1]: Assignment received from leader consumer-138f9fa3-ce1b-405c-9d22-e6763c020d7f-3-ed44512a-2f1c-4fa2-bc63-34a09c71ab3b for group 138f9fa3-ce1b-405c-9d22-e6763c020d7f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:38
23:16:59 kafka | [2024-04-29 23:15:06,808] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-50560068-6271-4dcd-9a1a-dfae16161e40 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:38
23:16:59 kafka | [2024-04-29 23:15:07,217] INFO [GroupCoordinator 1]: Stabilized group 085fa03c-d2d9-404c-b0e2-72bc2e06aca2 generation 1 (__consumer_offsets-19) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:38
23:16:59 kafka | [2024-04-29 23:15:07,234] INFO [GroupCoordinator 1]: Assignment received from leader consumer-085fa03c-d2d9-404c-b0e2-72bc2e06aca2-2-0386bef5-4364-47bf-87f4-bb00f58168cb for group 085fa03c-d2d9-404c-b0e2-72bc2e06aca2 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:59 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2904242314340800u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2904242314340900u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:38
23:16:59 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2904242314341000u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2904242314341100u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2904242314341200u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2904242314341200u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2904242314341200u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2904242314341200u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2904242314341300u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2904242314341300u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2904242314341300u 1 2024-04-29 23:14:39
23:16:59 policy-db-migrator | policyadmin: OK @ 1300
23:16:59 ++ echo 'Tearing down containers...'
23:16:59 Tearing down containers...
23:16:59 ++ docker-compose down -v --remove-orphans
23:17:00 Stopping policy-apex-pdp ...
23:17:00 Stopping policy-pap ...
23:17:00 Stopping policy-api ...
23:17:00 Stopping grafana ...
23:17:00 Stopping kafka ...
23:17:00 Stopping prometheus ...
23:17:00 Stopping mariadb ...
23:17:00 Stopping zookeeper ...
23:17:00 Stopping simulator ...
23:17:00 Stopping grafana ... done
23:17:01 Stopping prometheus ... done
23:17:10 Stopping policy-apex-pdp ... done
23:17:20 Stopping policy-pap ... done
23:17:20 Stopping simulator ... done
23:17:21 Stopping mariadb ... done
23:17:21 Stopping kafka ... done
23:17:22 Stopping zookeeper ... done
23:17:31 Stopping policy-api ... done
23:17:31 Removing policy-apex-pdp ...
23:17:31 Removing policy-pap ...
23:17:31 Removing policy-api ...
23:17:31 Removing policy-db-migrator ...
23:17:31 Removing grafana ...
23:17:31 Removing kafka ...
23:17:31 Removing prometheus ...
23:17:31 Removing mariadb ...
23:17:31 Removing zookeeper ...
23:17:31 Removing simulator ...
23:17:31 Removing policy-apex-pdp ... done
23:17:31 Removing policy-db-migrator ... done
23:17:31 Removing prometheus ... done
23:17:31 Removing policy-api ... done
23:17:31 Removing simulator ... done
23:17:31 Removing zookeeper ... done
23:17:31 Removing grafana ... done
23:17:31 Removing kafka ... done
23:17:31 Removing policy-pap ... done
23:17:31 Removing mariadb ... done
23:17:31 Removing network compose_default
23:17:31 ++ cd /w/workspace/policy-pap-master-project-csit-pap
23:17:31 + load_set
23:17:31 + _setopts=hxB
23:17:31 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:17:31 ++ tr : ' '
23:17:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:31 + set +o braceexpand
23:17:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:31 + set +o hashall
23:17:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:31 + set +o interactive-comments
23:17:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:31 + set +o xtrace
23:17:31 ++ echo hxB
23:17:31 ++ sed 's/./& /g'
23:17:31 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:31 + set +h
23:17:31 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:31 + set +x
23:17:31 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:17:31 + [[ -n /tmp/tmp.R5STfyAPO3 ]]
23:17:31 + rsync -av /tmp/tmp.R5STfyAPO3/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:17:31 sending incremental file list
23:17:31 ./
23:17:31 log.html
23:17:31 output.xml
23:17:31 report.html
23:17:31 testplan.txt
23:17:31
23:17:31 sent 918,998 bytes received 95 bytes 1,838,186.00 bytes/sec
23:17:31 total size is 918,452 speedup is 1.00
23:17:31 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:17:31 + exit 0
23:17:31 $ ssh-agent -k
23:17:31 unset SSH_AUTH_SOCK;
23:17:31 unset SSH_AGENT_PID;
23:17:31 echo Agent pid 2079 killed;
23:17:31 [ssh-agent] Stopped.
23:17:31 Robot results publisher started...
23:17:31 INFO: Checking test criticality is deprecated and will be dropped in a future release!
23:17:31 -Parsing output xml:
23:17:32 Done!
23:17:32 WARNING! Could not find file: **/log.html
23:17:32 WARNING! Could not find file: **/report.html
23:17:32 -Copying log files to build dir:
23:17:32 Done!
23:17:32 -Assigning results to build:
23:17:32 Done!
23:17:32 -Checking thresholds:
23:17:32 Done!
23:17:32 Done publishing Robot results.
23:17:32 [PostBuildScript] - [INFO] Executing post build scripts.
23:17:32 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9294738066481017559.sh
23:17:32 ---> sysstat.sh
23:17:32 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17031333591537196286.sh
23:17:32 ---> package-listing.sh
23:17:32 ++ facter osfamily
23:17:32 ++ tr '[:upper:]' '[:lower:]'
23:17:33 + OS_FAMILY=debian
23:17:33 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:17:33 + START_PACKAGES=/tmp/packages_start.txt
23:17:33 + END_PACKAGES=/tmp/packages_end.txt
23:17:33 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:17:33 + PACKAGES=/tmp/packages_start.txt
23:17:33 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:33 + PACKAGES=/tmp/packages_end.txt
23:17:33 + case "${OS_FAMILY}" in
23:17:33 + dpkg -l
23:17:33 + grep '^ii'
23:17:33 + '[' -f /tmp/packages_start.txt ']'
23:17:33 + '[' -f /tmp/packages_end.txt ']'
23:17:33 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:17:33 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:33 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:33 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:33 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5193550508407363774.sh
23:17:33 ---> capture-instance-metadata.sh
23:17:33 Setup pyenv:
23:17:33 system
23:17:33 3.8.13
23:17:33 3.9.13
23:17:33 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:33 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KuFe from file:/tmp/.os_lf_venv
23:17:34 lf-activate-venv(): INFO: Installing: lftools
23:17:44 lf-activate-venv(): INFO: Adding /tmp/venv-KuFe/bin to PATH
23:17:44 INFO: Running in OpenStack, capturing instance metadata
23:17:44 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11995280465393273302.sh
23:17:44 provisioning config files...
23:17:44 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config6068454524258685401tmp
23:17:44 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:17:44 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:17:44 [EnvInject] - Injecting environment variables from a build step.
23:17:44 [EnvInject] - Injecting as environment variables the properties content
23:17:44 SERVER_ID=logs
23:17:44
23:17:44 [EnvInject] - Variables injected successfully.
23:17:44 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11212939120818672078.sh
23:17:44 ---> create-netrc.sh
23:17:44 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16132457009350245664.sh
23:17:44 ---> python-tools-install.sh
23:17:44 Setup pyenv:
23:17:44 system
23:17:44 3.8.13
23:17:44 3.9.13
23:17:44 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KuFe from file:/tmp/.os_lf_venv
23:17:46 lf-activate-venv(): INFO: Installing: lftools
23:17:54 lf-activate-venv(): INFO: Adding /tmp/venv-KuFe/bin to PATH
23:17:54 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9921853616028115329.sh
23:17:54 ---> sudo-logs.sh
23:17:54 Archiving 'sudo' log..
23:17:54 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins333953284375041318.sh
23:17:54 ---> job-cost.sh
23:17:54 Setup pyenv:
23:17:54 system
23:17:54 3.8.13
23:17:54 3.9.13
23:17:54 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:54 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KuFe from file:/tmp/.os_lf_venv
23:17:55 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:18:00 lf-activate-venv(): INFO: Adding /tmp/venv-KuFe/bin to PATH
23:18:00 INFO: No Stack...
23:18:00 INFO: Retrieving Pricing Info for: v3-standard-8
23:18:00 INFO: Archiving Costs
23:18:00 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins9042639546946858120.sh
23:18:00 ---> logs-deploy.sh
23:18:00 Setup pyenv:
23:18:01 system
23:18:01 3.8.13
23:18:01 3.9.13
23:18:01 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-KuFe from file:/tmp/.os_lf_venv
23:18:02 lf-activate-venv(): INFO: Installing: lftools
23:18:10 lf-activate-venv(): INFO: Adding /tmp/venv-KuFe/bin to PATH
23:18:10 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1670
23:18:10 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:18:11 Archives upload complete.
23:18:12 INFO: archiving logs to Nexus
23:18:12 ---> uname -a:
23:18:12 Linux prd-ubuntu1804-docker-8c-8g-36303 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:12
23:18:12
23:18:12 ---> lscpu:
23:18:12 Architecture: x86_64
23:18:12 CPU op-mode(s): 32-bit, 64-bit
23:18:12 Byte Order: Little Endian
23:18:12 CPU(s): 8
23:18:12 On-line CPU(s) list: 0-7
23:18:12 Thread(s) per core: 1
23:18:12 Core(s) per socket: 1
23:18:12 Socket(s): 8
23:18:12 NUMA node(s): 1
23:18:12 Vendor ID: AuthenticAMD
23:18:12 CPU family: 23
23:18:12 Model: 49
23:18:12 Model name: AMD EPYC-Rome Processor
23:18:12 Stepping: 0
23:18:12 CPU MHz: 2799.998
23:18:12 BogoMIPS: 5599.99
23:18:12 Virtualization: AMD-V
23:18:12 Hypervisor vendor: KVM
23:18:12 Virtualization type: full
23:18:12 L1d cache: 32K
23:18:12 L1i cache: 32K
23:18:12 L2 cache: 512K
23:18:12 L3 cache: 16384K
23:18:12 NUMA node0 CPU(s): 0-7
23:18:12 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:12
23:18:12
23:18:12 ---> nproc:
23:18:12 8
23:18:12
23:18:12
23:18:12 ---> df -h:
23:18:12 Filesystem Size Used Avail Use% Mounted on
23:18:12 udev 16G 0 16G 0% /dev
23:18:12 tmpfs 3.2G 708K 3.2G 1% /run
23:18:12 /dev/vda1 155G 14G 142G 9% /
23:18:12 tmpfs 16G 0 16G 0% /dev/shm
23:18:12 tmpfs 5.0M 0 5.0M 0% /run/lock
23:18:12 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:18:12 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:18:12 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:18:12
23:18:12
23:18:12 ---> free -m:
23:18:12 total used free shared buff/cache available
23:18:12 Mem: 32167 847 25376 0 5943 30864
23:18:12 Swap: 1023 0 1023
23:18:12
23:18:12
23:18:12 ---> ip addr:
23:18:12 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:12 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:12 inet 127.0.0.1/8 scope host lo
23:18:12 valid_lft forever preferred_lft forever
23:18:12 inet6 ::1/128 scope host
23:18:12 valid_lft forever preferred_lft forever
23:18:12 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:12 link/ether fa:16:3e:af:21:38 brd ff:ff:ff:ff:ff:ff
23:18:12 inet 10.30.106.111/23 brd 10.30.107.255 scope global dynamic ens3
23:18:12 valid_lft 85929sec preferred_lft 85929sec
23:18:12 inet6 fe80::f816:3eff:feaf:2138/64 scope link
23:18:12 valid_lft forever preferred_lft forever
23:18:12 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:12 link/ether 02:42:cc:00:5c:05 brd ff:ff:ff:ff:ff:ff
23:18:12 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:12 valid_lft forever preferred_lft forever
23:18:12
23:18:12
23:18:12 ---> sar -b -r -n DEV:
23:18:12 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-36303) 04/29/24 _x86_64_ (8 CPU)
23:18:12
23:18:12 23:10:24 LINUX RESTART (8 CPU)
23:18:12
23:18:12 23:11:02 tps rtps wtps bread/s bwrtn/s
23:18:12 23:12:01 129.68 31.45 98.24 1580.35 62078.35
23:18:12 23:13:01 145.79 19.76 126.03 2299.88 67529.15
23:18:12 23:14:01 164.26 3.42 160.84 475.65 73771.17
23:18:12 23:15:01 390.17 11.76 378.40 801.57 65580.10
23:18:12 23:16:01 16.76 0.48 16.28 22.80 417.26
23:18:12 23:17:01 11.03 0.12 10.91 14.80 1099.10
23:18:12 23:18:01 62.45 1.98 60.46 112.76 2565.16
23:18:12 Average: 131.45 9.80 121.65 756.30 38950.38
23:18:12
23:18:12 23:11:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:12 23:12:01 30152584 31720480 2786636 8.46 68148 1810616 1427100 4.20 848640 1645428 139440
23:18:12 23:13:01 29810348 31741952 3128872 9.50 89624 2135604 1356232 3.99 875868 1924440 135908
23:18:12 23:14:01 26748752 31671440 6190468 18.79 134596 4954388 1387016 4.08 1008792 4691360 932216
23:18:12 23:15:01 24053932 29822032 8885288 26.97 153976 5732660 8323940 24.49 3026960 5266252 512
23:18:12 23:16:01 23841100 29615488 9098120 27.62 155628 5735000 8677088 25.53 3249368 5249092 436
23:18:12 23:17:01 23874768 29675660 9064452 27.52 156068 5762888 7991396 23.51 3206832 5262796 264
23:18:12 23:18:01 26017976 31636908 6921244 21.01 157912 5595372 1497940 4.41 1278412 5107396 2340
23:18:12 Average: 26357066 30840566 6582154 19.98 130850 4532361 4380102 12.89 1927839 4163823 173017
23:18:12
23:18:12 23:11:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:18:12 23:12:01 ens3 51.05 36.89 833.91 6.18 0.00 0.00 0.00 0.00
23:18:12 23:12:01 lo 1.42 1.42 0.16 0.16 0.00 0.00 0.00 0.00
23:18:12 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:12 23:13:01 ens3 59.52 46.91 831.71 8.79 0.00 0.00 0.00 0.00
23:18:12 23:13:01 lo 1.93 1.93 0.20 0.20 0.00 0.00 0.00 0.00
23:18:12 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:12 23:14:01 ens3 1045.63 498.58 22404.09 36.20 0.00 0.00 0.00 0.00
23:18:12 23:14:01 br-ad151427f248 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:12 23:14:01 lo 10.86 10.86 1.05 1.05 0.00 0.00 0.00 0.00
23:18:12 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:12 23:15:01 veth4e1b604 1.35 2.08 0.15 0.19 0.00 0.00 0.00 0.00
23:18:12 23:15:01 ens3 192.62 97.45 6974.29 7.20 0.00 0.00 0.00 0.00
23:18:12 23:15:01 veth0142243 0.78 0.95 0.05 0.05 0.00 0.00 0.00 0.00
23:18:12 23:15:01 br-ad151427f248 0.80 0.70 0.06 0.31 0.00 0.00 0.00 0.00
23:18:12 23:16:01 veth4e1b604 3.73 4.60 0.66 0.74 0.00 0.00 0.00 0.00
23:18:12 23:16:01 ens3 5.35 4.17 1.13 1.62 0.00 0.00 0.00 0.00
23:18:12 23:16:01 veth0142243 4.12 5.37 0.82 0.53 0.00 0.00 0.00 0.00
23:18:12 23:16:01 br-ad151427f248 2.20 2.53 1.82 1.74 0.00 0.00 0.00 0.00
23:18:12 23:17:01 veth4e1b604 0.17 0.37 0.01 0.03 0.00 0.00 0.00 0.00
23:18:12 23:17:01 ens3 14.35 13.56 6.24 16.12 0.00 0.00 0.00 0.00
23:18:12 23:17:01 veth0142243 3.22 4.73 0.66 0.36 0.00 0.00 0.00 0.00
23:18:12 23:17:01 br-ad151427f248 1.28 1.52 0.10 0.14 0.00 0.00 0.00 0.00
23:18:12 23:18:01 ens3 43.64 36.82 69.02 17.22 0.00 0.00 0.00 0.00
23:18:12 23:18:01 lo 34.66 34.66 6.19 6.19 0.00 0.00 0.00 0.00
23:18:12 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:12 Average: ens3 202.09 105.07 4454.20 13.35 0.00 0.00 0.00 0.00
23:18:12 Average: lo 4.40 4.40 0.84 0.84 0.00 0.00 0.00 0.00
23:18:12 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:12
23:18:12
23:18:12 ---> sar -P ALL:
23:18:12 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-36303) 04/29/24 _x86_64_ (8 CPU)
23:18:12
23:18:12 23:10:24 LINUX RESTART (8 CPU)
23:18:12
23:18:12 23:11:02 CPU %user %nice %system %iowait %steal %idle
23:18:12 23:12:01 all 8.97 0.00 0.68 6.69 0.04 83.63
23:18:12 23:12:01 0 9.50 0.00 0.53 0.10 0.05 89.82
23:18:12 23:12:01 1 2.25 0.00 0.27 0.12 0.03 97.32
23:18:12 23:12:01 2 6.46 0.00 0.34 0.53 0.02 92.65
23:18:12 23:12:01 3 1.46 0.00 0.39 10.32 0.02 87.82
23:18:12 23:12:01 4 2.92 0.00 0.31 1.26 0.07 95.44
23:18:12 23:12:01 5 30.38 0.00 1.85 7.29 0.07 60.41
23:18:12 23:12:01 6 13.83 0.00 1.28 4.46 0.03 80.40
23:18:12 23:12:01 7 5.11 0.00 0.46 29.50 0.03 64.89
23:18:12 23:13:01 all 9.38 0.00 0.75 6.43 0.04 83.40
23:18:12 23:13:01 0 1.79 0.00 0.45 0.05 0.03 97.68
23:18:12 23:13:01 1 2.63 0.00 0.22 0.15 0.00 97.00
23:18:12 23:13:01 2 0.23 0.00 0.08 0.13 0.00 99.55
23:18:12 23:13:01 3 21.38 0.00 1.12 29.03 0.05 48.42
23:18:12 23:13:01 4 7.26 0.00 0.72 16.84 0.07 75.11
23:18:12 23:13:01 5 9.07 0.00 0.90 0.50 0.05 89.48
23:18:12 23:13:01 6 24.55 0.00 1.64 3.10 0.05 70.66
23:18:12 23:13:01 7 8.23 0.00 0.80 1.72 0.03 89.22
23:18:12 23:14:01 all 11.11 0.00 4.79 6.22 0.07 77.81
23:18:12 23:14:01 0 11.78 0.00 5.48 0.10 0.08 82.55
23:18:12 23:14:01 1 9.23 0.00 4.96 29.21 0.12 56.49
23:18:12 23:14:01 2 12.27 0.00 3.66 8.61 0.07 75.40
23:18:12 23:14:01 3 9.44 0.00 3.39 6.96 0.05 80.16
23:18:12 23:14:01 4 11.76 0.00 4.70 1.54 0.07 81.93
23:18:12 23:14:01 5 11.58 0.00 5.44 0.29 0.07 82.63
23:18:12 23:14:01 6 10.31 0.00 5.68 0.73 0.07 83.21
23:18:12 23:14:01 7 12.49 0.00 4.98 2.45 0.08 79.99
23:18:12 23:15:01 all 23.83 0.00 3.85 4.82 0.07 67.44
23:18:12 23:15:01 0 23.11 0.00 4.19 16.91 0.07 55.72
23:18:12 23:15:01 1 21.09 0.00 3.98 7.09 0.08 67.75
23:18:12 23:15:01 2 30.15 0.00 4.65 1.87 0.07 63.27
23:18:12 23:15:01 3 24.46 0.00 3.26 4.27 0.07 67.95
23:18:12 23:15:01 4 22.35 0.00 3.84 0.82 0.08 72.91
23:18:12 23:15:01 5 23.17 0.00 3.51 0.96 0.05 72.32
23:18:12 23:15:01 6 22.69 0.00 4.49 4.49 0.07 68.25
23:18:12 23:15:01 7 23.62 0.00 2.92 2.08 0.07 71.32
23:18:12 23:16:01 all 8.96 0.00 0.79 0.05 0.06 90.14
23:18:12 23:16:01 0 10.02 0.00 0.98 0.02 0.05 88.93
23:18:12 23:16:01 1 8.03 0.00 0.82 0.08 0.05 91.02
23:18:12 23:16:01 2 9.75 0.00 0.73 0.02 0.05 89.44
23:18:12 23:16:01 3 8.26 0.00 0.75 0.02 0.03 90.94
23:18:12 23:16:01 4 8.86 0.00 0.84 0.02 0.07 90.22
23:18:12 23:16:01 5 8.06 0.00 0.64 0.02 0.08 91.21
23:18:12 23:16:01 6 9.79 0.00 0.70 0.02 0.05 89.44
23:18:12 23:16:01 7 8.98 0.00 0.82 0.27 0.07 89.87
23:18:12 23:17:01 all 1.34 0.00 0.33 0.09 0.04 98.20
23:18:12 23:17:01 0 1.23 0.00 0.30 0.40 0.02 98.05
23:18:12 23:17:01 1 1.80 0.00 0.35 0.12 0.08 97.64
23:18:12 23:17:01 2 1.02 0.00 0.32 0.12 0.05 98.50
23:18:12 23:17:01 3 2.33 0.00 0.42 0.00 0.03 97.22
23:18:12 23:17:01 4 1.07 0.00 0.33 0.02 0.05 98.53
23:18:12 23:17:01 5 0.80 0.00 0.30 0.02 0.02 98.87
23:18:12 23:17:01 6 1.40 0.00 0.23 0.00 0.05 98.31
23:18:12 23:17:01 7 1.05 0.00 0.37 0.00 0.03 98.55
23:18:12 23:18:01 all 6.08 0.00 0.66 0.30 0.03 92.93
23:18:12 23:18:01 0 3.72 0.00 0.63 0.07 0.02 95.56
23:18:12 23:18:01 1 0.72 0.00 0.48 0.58 0.03 98.18
23:18:12 23:18:01 2 7.63 0.00 0.67 0.28 0.03 91.39
23:18:12 23:18:01 3 16.69 0.00 0.69 0.22 0.05 82.35
23:18:12 23:18:01 4 1.05 0.00 0.60 0.12 0.02 98.21
23:18:12 23:18:01 5 14.53 0.00 0.98 1.10 0.05 83.33
23:18:12 23:18:01 6 3.30 0.00 0.58 0.03 0.02 96.06
23:18:12 23:18:01 7 1.00 0.00 0.63 0.05 0.02 98.30
23:18:12 Average: all 9.94 0.00 1.68 3.50 0.05 84.83
23:18:12 Average: 0 8.71 0.00 1.79 2.52 0.05 86.94
23:18:12 Average: 1 6.54 0.00 1.58 5.28 0.06 86.55
23:18:12 Average: 2 9.63 0.00 1.49 1.64 0.04 87.20
23:18:12 Average: 3 12.02 0.00 1.43 7.25 0.04 79.26
23:18:12 Average: 4 7.89 0.00 1.62 2.95 0.06 87.48
23:18:12 Average: 5 13.88 0.00 1.94 1.44 0.06 82.69
23:18:12 Average: 6 12.25 0.00 2.08 1.82 0.05 83.80
23:18:12 Average: 7 8.61 0.00 1.56 5.10 0.05 84.67
23:18:12
23:18:12
23:18:12