13:50:50 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137060
13:50:50 Running as SYSTEM
13:50:50 [EnvInject] - Loading node environment variables.
13:50:50 Building remotely on prd-ubuntu1804-docker-8c-8g-14213 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-verify-pap
13:50:50 [ssh-agent] Looking for ssh-agent implementation...
13:50:51 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
13:50:51 $ ssh-agent
13:50:51 SSH_AUTH_SOCK=/tmp/ssh-Yi0MVoirl3A6/agent.2140
13:50:51 SSH_AGENT_PID=2142
13:50:51 [ssh-agent] Started.
13:50:51 Running ssh-add (command line suppressed)
13:50:51 Identity added: /w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_3550345780838948553.key (/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_3550345780838948553.key)
13:50:51 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
13:50:51 The recommended git tool is: NONE
13:50:53 using credential onap-jenkins-ssh
13:50:53 Wiping out workspace first.
13:50:53 Cloning the remote Git repository
13:50:53 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
13:50:53 > git init /w/workspace/policy-pap-master-project-csit-verify-pap # timeout=10
13:50:53 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
13:50:53 > git --version # timeout=10
13:50:53 > git --version # 'git version 2.17.1'
13:50:53 using GIT_SSH to set credentials Gerrit user
13:50:53 Verifying host key using manually-configured host key entries
13:50:53 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
13:50:53 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
13:50:53 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
13:50:54 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
13:50:54 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
13:50:54 using GIT_SSH to set credentials Gerrit user
13:50:54 Verifying host key using manually-configured host key entries
13:50:54 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/60/137060/2 # timeout=30
13:50:54 > git rev-parse b398b692983f0c3a8cd19dd7c46b3af8d1a0a146^{commit} # timeout=10
13:50:54 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
13:50:54 Checking out Revision b398b692983f0c3a8cd19dd7c46b3af8d1a0a146 (refs/changes/60/137060/2)
13:50:54 > git config core.sparsecheckout # timeout=10
13:50:54 > git checkout -f b398b692983f0c3a8cd19dd7c46b3af8d1a0a146 # timeout=30
13:50:54 Commit message: "Add kafka support in K8s CSIT"
13:50:54 > git rev-parse FETCH_HEAD^{commit} # timeout=10
13:50:55 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10
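
The SCM steps above boil down to fetching a Gerrit change ref from the read-only mirror and checking it out detached. A minimal sketch of reproducing this locally, assuming the same mirror URL and change ref (refs/changes/60/137060/2) shown in the log:

    # Clone the mirror and fetch the change under review
    git clone git://cloud.onap.org/mirror/policy/docker.git
    cd docker
    git fetch origin refs/changes/60/137060/2
    git checkout FETCH_HEAD   # detached checkout of the patch set, as the job does
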
13:50:55 provisioning config files...
13:50:55 copy managed file [npmrc] to file:/home/jenkins/.npmrc
13:50:55 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
13:50:55 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins14080630004799768180.sh
13:50:55 ---> python-tools-install.sh
13:50:55 Setup pyenv:
13:50:55 * system (set by /opt/pyenv/version)
13:50:55 * 3.8.13 (set by /opt/pyenv/version)
13:50:55 * 3.9.13 (set by /opt/pyenv/version)
13:50:55 * 3.10.6 (set by /opt/pyenv/version)
13:51:00 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-l0SW
13:51:00 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
13:51:05 lf-activate-venv(): INFO: Installing: lftools
13:51:49 lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
13:51:49 Generating Requirements File
13:52:27 Python 3.10.6
13:52:27 pip 23.3.2 from /tmp/venv-l0SW/lib/python3.10/site-packages/pip (python 3.10)
13:52:28 appdirs==1.4.4
13:52:28 argcomplete==3.2.1
13:52:28 aspy.yaml==1.3.0
13:52:28 attrs==23.2.0
13:52:28 autopage==0.5.2
13:52:28 beautifulsoup4==4.12.3
13:52:28 boto3==1.34.23
13:52:28 botocore==1.34.23
13:52:28 bs4==0.0.2
13:52:28 cachetools==5.3.2
13:52:28 certifi==2023.11.17
13:52:28 cffi==1.16.0
13:52:28 cfgv==3.4.0
13:52:28 chardet==5.2.0
13:52:28 charset-normalizer==3.3.2
13:52:28 click==8.1.7
13:52:28 cliff==4.5.0
13:52:28 cmd2==2.4.3
13:52:28 cryptography==3.3.2
13:52:28 debtcollector==2.5.0
13:52:28 decorator==5.1.1
13:52:28 defusedxml==0.7.1
13:52:28 Deprecated==1.2.14
13:52:28 distlib==0.3.8
13:52:28 dnspython==2.5.0
13:52:28 docker==4.2.2
13:52:28 dogpile.cache==1.3.0
13:52:28 email-validator==2.1.0.post1
13:52:28 filelock==3.13.1
13:52:28 future==0.18.3
13:52:28 gitdb==4.0.11
13:52:28 GitPython==3.1.41
13:52:28 google-auth==2.26.2
13:52:28 httplib2==0.22.0
13:52:28 identify==2.5.33
13:52:28 idna==3.6
13:52:28 importlib-resources==1.5.0
13:52:28 iso8601==2.1.0
13:52:28 Jinja2==3.1.3
13:52:28 jmespath==1.0.1
13:52:28 jsonpatch==1.33
13:52:28 jsonpointer==2.4
13:52:28 jsonschema==4.21.1
13:52:28 jsonschema-specifications==2023.12.1
13:52:28 keystoneauth1==5.5.0
13:52:28 kubernetes==29.0.0
13:52:28 lftools==0.37.8
13:52:28 lxml==5.1.0
13:52:28 MarkupSafe==2.1.4
13:52:28 msgpack==1.0.7
13:52:28 multi_key_dict==2.0.3
13:52:28 munch==4.0.0
13:52:28 netaddr==0.10.1
13:52:28 netifaces==0.11.0
13:52:28 niet==1.4.2
13:52:28 nodeenv==1.8.0
13:52:28 oauth2client==4.1.3
13:52:28 oauthlib==3.2.2
13:52:28 openstacksdk==0.62.0
13:52:28 os-client-config==2.1.0
13:52:28 os-service-types==1.7.0
13:52:28 osc-lib==3.0.0
13:52:28 oslo.config==9.3.0
13:52:28 oslo.context==5.3.0
13:52:28 oslo.i18n==6.2.0
13:52:28 oslo.log==5.4.0
13:52:28 oslo.serialization==5.3.0
13:52:28 oslo.utils==7.0.0
13:52:28 packaging==23.2
13:52:28 pbr==6.0.0
13:52:28 platformdirs==4.1.0
13:52:28 prettytable==3.9.0
13:52:28 pyasn1==0.5.1
13:52:28 pyasn1-modules==0.3.0
13:52:28 pycparser==2.21
13:52:28 pygerrit2==2.0.15
13:52:28 PyGithub==2.1.1
13:52:28 pyinotify==0.9.6
13:52:28 PyJWT==2.8.0
13:52:28 PyNaCl==1.5.0
13:52:28 pyparsing==2.4.7
13:52:28 pyperclip==1.8.2
13:52:28 pyrsistent==0.20.0
13:52:28 python-cinderclient==9.4.0
13:52:28 python-dateutil==2.8.2
13:52:28 python-heatclient==3.4.0
13:52:28 python-jenkins==1.8.2
13:52:28 python-keystoneclient==5.3.0
13:52:28 python-magnumclient==4.3.0
13:52:28 python-novaclient==18.4.0
13:52:28 python-openstackclient==6.0.0
13:52:28 python-swiftclient==4.4.0
13:52:28 pytz==2023.3.post1
13:52:28 PyYAML==6.0.1
13:52:28 referencing==0.32.1
13:52:28 requests==2.31.0
13:52:28 requests-oauthlib==1.3.1
13:52:28 requestsexceptions==1.4.0
13:52:28 rfc3986==2.0.0
13:52:28 rpds-py==0.17.1
13:52:28 rsa==4.9
13:52:28 ruamel.yaml==0.18.5
13:52:28 ruamel.yaml.clib==0.2.8
13:52:28 s3transfer==0.10.0
13:52:28 simplejson==3.19.2
13:52:28 six==1.16.0
13:52:28 smmap==5.0.1
13:52:28 soupsieve==2.5
13:52:28 stevedore==5.1.0
13:52:28 tabulate==0.9.0
13:52:28 toml==0.10.2
13:52:28 tomlkit==0.12.3
13:52:28 tqdm==4.66.1
13:52:28 typing_extensions==4.9.0
13:52:28 tzdata==2023.4
13:52:28 urllib3==1.26.18
13:52:28 virtualenv==20.25.0
13:52:28 wcwidth==0.2.13
13:52:28 websocket-client==1.7.0
13:52:28 wrapt==1.16.0
13:52:28 xdg==6.0.0
13:52:28 xmltodict==0.13.0
13:52:28 yq==3.2.3
13:52:28 [EnvInject] - Injecting environment variables from a build step.
13:52:28 [EnvInject] - Injecting as environment variables the properties content
13:52:28 SET_JDK_VERSION=openjdk17
13:52:28 GIT_URL="git://cloud.onap.org/mirror"
13:52:28
13:52:28 [EnvInject] - Variables injected successfully.
13:52:28 [policy-pap-master-project-csit-verify-pap] $ /bin/sh /tmp/jenkins4579151614360948690.sh
13:52:28 ---> update-java-alternatives.sh
13:52:28 ---> Updating Java version
13:52:28 ---> Ubuntu/Debian system detected
13:52:28 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
13:52:28 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
13:52:28 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
13:52:28 openjdk version "17.0.4" 2022-07-19
13:52:28 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
13:52:28 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
13:52:28 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
13:52:28 [EnvInject] - Injecting environment variables from a build step.
13:52:28 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
13:52:28 [EnvInject] - Variables injected successfully.
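
The Java switch above uses Debian's alternatives system. Roughly equivalent manual commands (paths taken from the log output; the update-java-alternatives.sh helper itself is not shown here, so this is an approximation):

    # Select OpenJDK 17 for java/javac and export JAVA_HOME accordingly
    sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
    java -version   # expect "openjdk version 17.0.x"
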
13:52:28 [policy-pap-master-project-csit-verify-pap] $ /bin/sh -xe /tmp/jenkins10776437951953663126.sh
13:52:28 + /w/workspace/policy-pap-master-project-csit-verify-pap/csit/run-project-csit.sh pap
13:52:28 + set +u
13:52:28 + save_set
13:52:28 + RUN_CSIT_SAVE_SET=ehxB
13:52:28 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
13:52:28 + '[' 1 -eq 0 ']'
13:52:28 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
13:52:28 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
13:52:28 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
13:52:28 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts
13:52:28 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts
13:52:28 + export ROBOT_VARIABLES=
13:52:28 + ROBOT_VARIABLES=
13:52:28 + export PROJECT=pap
13:52:28 + PROJECT=pap
13:52:28 + cd /w/workspace/policy-pap-master-project-csit-verify-pap
13:52:28 + rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
13:52:28 + mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
13:52:28 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh
13:52:28 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh ']'
13:52:28 + relax_set
13:52:28 + set +e
13:52:28 + set +o pipefail
13:52:28 + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh
13:52:28 ++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
13:52:28 +++ mktemp -d
13:52:28 ++ ROBOT_VENV=/tmp/tmp.glEjOpyI3A
13:52:28 ++ echo ROBOT_VENV=/tmp/tmp.glEjOpyI3A
13:52:28 +++ python3 --version
13:52:28 ++ echo 'Python version is: Python 3.6.9'
13:52:28 Python version is: Python 3.6.9
13:52:28 ++ python3 -m venv --clear /tmp/tmp.glEjOpyI3A
13:52:30 ++ source /tmp/tmp.glEjOpyI3A/bin/activate
13:52:30 +++ deactivate nondestructive
13:52:30 +++ '[' -n '' ']'
13:52:30 +++ '[' -n '' ']'
13:52:30 +++ '[' -n /bin/bash -o -n '' ']'
13:52:30 +++ hash -r
13:52:30 +++ '[' -n '' ']'
13:52:30 +++ unset VIRTUAL_ENV
13:52:30 +++ '[' '!' nondestructive = nondestructive ']'
13:52:30 +++ VIRTUAL_ENV=/tmp/tmp.glEjOpyI3A
13:52:30 +++ export VIRTUAL_ENV
13:52:30 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
13:52:30 +++ PATH=/tmp/tmp.glEjOpyI3A/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
13:52:30 +++ export PATH
13:52:30 +++ '[' -n '' ']'
13:52:30 +++ '[' -z '' ']'
13:52:30 +++ _OLD_VIRTUAL_PS1=
13:52:30 +++ '[' 'x(tmp.glEjOpyI3A) ' '!=' x ']'
13:52:30 +++ PS1='(tmp.glEjOpyI3A) '
13:52:30 +++ export PS1
13:52:30 +++ '[' -n /bin/bash -o -n '' ']'
13:52:30 +++ hash -r
13:52:30 ++ set -exu
13:52:30 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
13:52:33 ++ echo 'Installing Python Requirements'
13:52:33 Installing Python Requirements
13:52:33 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/pylibs.txt
13:52:51 ++ python3 -m pip -qq freeze
13:52:52 bcrypt==4.0.1
13:52:52 beautifulsoup4==4.12.3
13:52:52 bitarray==2.9.2
13:52:52 certifi==2023.11.17
13:52:52 cffi==1.15.1
13:52:52 charset-normalizer==2.0.12
13:52:52 cryptography==40.0.2
13:52:52 decorator==5.1.1
13:52:52 elasticsearch==7.17.9
13:52:52 elasticsearch-dsl==7.4.1
13:52:52 enum34==1.1.10
13:52:52 idna==3.6
13:52:52 importlib-resources==5.4.0
13:52:52 ipaddr==2.2.0
13:52:52 isodate==0.6.1
13:52:52 jmespath==0.10.0
13:52:52 jsonpatch==1.32
13:52:52 jsonpath-rw==1.4.0
13:52:52 jsonpointer==2.3
13:52:52 lxml==5.1.0
13:52:52 netaddr==0.8.0
13:52:52 netifaces==0.11.0
13:52:52 odltools==0.1.28
13:52:52 paramiko==3.4.0
13:52:52 pkg_resources==0.0.0
13:52:52 ply==3.11
13:52:52 pyang==2.6.0
13:52:52 pyangbind==0.8.1
13:52:52 pycparser==2.21
13:52:52 pyhocon==0.3.60
13:52:52 PyNaCl==1.5.0
13:52:52 pyparsing==3.1.1
13:52:52 python-dateutil==2.8.2
13:52:52 regex==2023.8.8
13:52:52 requests==2.27.1
13:52:52 robotframework==6.1.1
13:52:52 robotframework-httplibrary==0.4.2
13:52:52 robotframework-pythonlibcore==3.0.0
13:52:52 robotframework-requests==0.9.4
13:52:52 robotframework-selenium2library==3.0.0
13:52:52 robotframework-seleniumlibrary==5.1.3
13:52:52 robotframework-sshlibrary==3.8.0
13:52:52 scapy==2.5.0
13:52:52 scp==0.14.5
13:52:52 selenium==3.141.0
13:52:52 six==1.16.0
13:52:52 soupsieve==2.3.2.post1
13:52:52 urllib3==1.26.18
13:52:52 waitress==2.0.0
13:52:52 WebOb==1.8.7
13:52:52 WebTest==3.0.0
13:52:52 zipp==3.6.0
13:52:52 ++ mkdir -p /tmp/tmp.glEjOpyI3A/src/onap
13:52:52 ++ rm -rf /tmp/tmp.glEjOpyI3A/src/onap/testsuite
13:52:52 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
13:52:58 ++ echo 'Installing python confluent-kafka library'
13:52:58 Installing python confluent-kafka library
13:52:58 ++ python3 -m pip install -qq confluent-kafka
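
The Robot environment built by prepare-robot-env.sh can be recreated outside Jenkins. A condensed sketch of the traced steps (the pylibs.txt path is relative to the policy/docker checkout; the pins match the log):

    # Throwaway virtualenv mirroring the CSIT Robot environment
    ROBOT_VENV=$(mktemp -d)
    python3 -m venv --clear "$ROBOT_VENV"
    source "$ROBOT_VENV/bin/activate"
    python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
    python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
    python3 -m pip install -qq confluent-kafka
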
13:52:59 ++ echo 'Uninstall docker-py and reinstall docker.'
13:52:59 Uninstall docker-py and reinstall docker.
13:52:59 ++ python3 -m pip uninstall -y -qq docker
13:53:00 ++ python3 -m pip install -U -qq docker
13:53:01 ++ python3 -m pip -qq freeze
13:53:02 bcrypt==4.0.1
13:53:02 beautifulsoup4==4.12.3
13:53:02 bitarray==2.9.2
13:53:02 certifi==2023.11.17
13:53:02 cffi==1.15.1
13:53:02 charset-normalizer==2.0.12
13:53:02 confluent-kafka==2.3.0
13:53:02 cryptography==40.0.2
13:53:02 decorator==5.1.1
13:53:02 deepdiff==5.7.0
13:53:02 dnspython==2.2.1
13:53:02 docker==5.0.3
13:53:02 elasticsearch==7.17.9
13:53:02 elasticsearch-dsl==7.4.1
13:53:02 enum34==1.1.10
13:53:02 future==0.18.3
13:53:02 idna==3.6
13:53:02 importlib-resources==5.4.0
13:53:02 ipaddr==2.2.0
13:53:02 isodate==0.6.1
13:53:02 Jinja2==3.0.3
13:53:02 jmespath==0.10.0
13:53:02 jsonpatch==1.32
13:53:02 jsonpath-rw==1.4.0
13:53:02 jsonpointer==2.3
13:53:02 kafka-python==2.0.2
13:53:02 lxml==5.1.0
13:53:02 MarkupSafe==2.0.1
13:53:02 more-itertools==5.0.0
13:53:02 netaddr==0.8.0
13:53:02 netifaces==0.11.0
13:53:02 odltools==0.1.28
13:53:02 ordered-set==4.0.2
13:53:02 paramiko==3.4.0
13:53:02 pbr==6.0.0
13:53:02 pkg_resources==0.0.0
13:53:02 ply==3.11
13:53:02 protobuf==3.19.6
13:53:02 pyang==2.6.0
13:53:02 pyangbind==0.8.1
13:53:02 pycparser==2.21
13:53:02 pyhocon==0.3.60
13:53:02 PyNaCl==1.5.0
13:53:02 pyparsing==3.1.1
13:53:02 python-dateutil==2.8.2
13:53:02 PyYAML==6.0.1
13:53:02 regex==2023.8.8
13:53:02 requests==2.27.1
13:53:02 robotframework==6.1.1
13:53:02 robotframework-httplibrary==0.4.2
13:53:02 robotframework-onap==0.6.0.dev105
13:53:02 robotframework-pythonlibcore==3.0.0
13:53:02 robotframework-requests==0.9.4
13:53:02 robotframework-selenium2library==3.0.0
13:53:02 robotframework-seleniumlibrary==5.1.3
13:53:02 robotframework-sshlibrary==3.8.0
13:53:02 robotlibcore-temp==1.0.2
13:53:02 scapy==2.5.0
13:53:02 scp==0.14.5
13:53:02 selenium==3.141.0
13:53:02 six==1.16.0
13:53:02 soupsieve==2.3.2.post1
13:53:02 urllib3==1.26.18
13:53:02 waitress==2.0.0
13:53:02 WebOb==1.8.7
13:53:02 websocket-client==1.3.1
13:53:02 WebTest==3.0.0
13:53:02 zipp==3.6.0
13:53:02 ++ uname
13:53:02 ++ grep -q Linux
13:53:02 ++ sudo apt-get -y -qq install libxml2-utils
13:53:02 + load_set
13:53:02 + _setopts=ehuxB
13:53:02 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
13:53:02 ++ tr : ' '
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o braceexpand
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o hashall
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o interactive-comments
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o nounset
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o xtrace
13:53:02 ++ echo ehuxB
13:53:02 ++ sed 's/./& /g'
13:53:02 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:53:02 + set +e
13:53:02 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:53:02 + set +h
13:53:02 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:53:02 + set +u
13:53:02 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:53:02 + set +x
13:53:02 + source_safely /tmp/tmp.glEjOpyI3A/bin/activate
13:53:02 + '[' -z /tmp/tmp.glEjOpyI3A/bin/activate ']'
13:53:02 + relax_set
13:53:02 + set +e
13:53:02 + set +o pipefail
13:53:02 + . /tmp/tmp.glEjOpyI3A/bin/activate
13:53:02 ++ deactivate nondestructive
13:53:02 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ']'
13:53:02 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
13:53:02 ++ export PATH
13:53:02 ++ unset _OLD_VIRTUAL_PATH
13:53:02 ++ '[' -n '' ']'
13:53:02 ++ '[' -n /bin/bash -o -n '' ']'
13:53:02 ++ hash -r
13:53:02 ++ '[' -n '' ']'
13:53:02 ++ unset VIRTUAL_ENV
13:53:02 ++ '[' '!' nondestructive = nondestructive ']'
13:53:02 ++ VIRTUAL_ENV=/tmp/tmp.glEjOpyI3A
13:53:02 ++ export VIRTUAL_ENV
13:53:02 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
13:53:02 ++ PATH=/tmp/tmp.glEjOpyI3A/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
13:53:02 ++ export PATH
13:53:02 ++ '[' -n '' ']'
13:53:02 ++ '[' -z '' ']'
13:53:02 ++ _OLD_VIRTUAL_PS1='(tmp.glEjOpyI3A) '
13:53:02 ++ '[' 'x(tmp.glEjOpyI3A) ' '!=' x ']'
13:53:02 ++ PS1='(tmp.glEjOpyI3A) (tmp.glEjOpyI3A) '
13:53:02 ++ export PS1
13:53:02 ++ '[' -n /bin/bash -o -n '' ']'
13:53:02 ++ hash -r
13:53:02 + load_set
13:53:02 + _setopts=hxB
13:53:02 ++ echo braceexpand:hashall:interactive-comments:xtrace
13:53:02 ++ tr : ' '
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o braceexpand
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o hashall
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o interactive-comments
13:53:02 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:53:02 + set +o xtrace
13:53:02 ++ echo hxB
13:53:02 ++ sed 's/./& /g'
13:53:02 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:53:02 + set +h
13:53:02 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:53:02 + set +x
13:53:02 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests
13:53:02 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests
13:53:02 + export TEST_OPTIONS=
13:53:02 + TEST_OPTIONS=
13:53:02 ++ mktemp -d
13:53:02 + WORKDIR=/tmp/tmp.vXEmLvt3D4
13:53:02 + cd /tmp/tmp.vXEmLvt3D4
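
The save_set/relax_set/load_set calls traced throughout this run bracket every sourced helper script: shell options are loosened before sourcing and restored afterwards. The job's own function bodies are only visible indirectly through the xtrace output, so the following is a simplified sketch of the same pattern, not the actual code:

    # Snapshot shell options, loosen them around a sourced script, then restore
    save_set()  { SAVED_OPTS="$(set +o)"; }     # "set +o" prints restorable settings
    relax_set() { set +e; set +o pipefail; }    # don't let a helper abort the job
    load_set()  { eval "$SAVED_OPTS"; }

    save_set
    relax_set
    . ./some-helper.sh                          # hypothetical sourced script
    load_set
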
13:53:02 + docker login -u docker -p docker nexus3.onap.org:10001
13:53:02 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
13:53:02 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
13:53:02 Configure a credential helper to remove this warning. See
13:53:02 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
13:53:02
13:53:02 Login Succeeded
13:53:02 + SETUP=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
13:53:02 + '[' -f /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']'
13:53:02 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh'
13:53:02 Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
13:53:02 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
13:53:02 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']'
13:53:02 + relax_set
13:53:02 + set +e
13:53:02 + set +o pipefail
13:53:02 + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
13:53:02 ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/node-templates.sh
13:53:02 +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
13:53:02 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-verify-pap/.gitreview
13:53:02 +++ GERRIT_BRANCH=master
13:53:02 +++ echo GERRIT_BRANCH=master
13:53:02 GERRIT_BRANCH=master
13:53:02 +++ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
13:53:02 +++ mkdir /w/workspace/policy-pap-master-project-csit-verify-pap/models
13:53:02 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-verify-pap/models
13:53:02 Cloning into '/w/workspace/policy-pap-master-project-csit-verify-pap/models'...
13:53:03 +++ export DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies
13:53:03 +++ DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies
13:53:03 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
13:53:03 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
13:53:03 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
13:53:03 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
13:53:03 ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/compose/start-compose.sh apex-pdp --grafana
13:53:03 +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
13:53:03 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose
13:53:03 +++ grafana=false
13:53:03 +++ gui=false
13:53:03 +++ [[ 2 -gt 0 ]]
13:53:03 +++ key=apex-pdp
13:53:03 +++ case $key in
13:53:03 +++ echo apex-pdp
13:53:03 apex-pdp
13:53:03 +++ component=apex-pdp
13:53:03 +++ shift
13:53:03 +++ [[ 1 -gt 0 ]]
13:53:03 +++ key=--grafana
13:53:03 +++ case $key in
13:53:03 +++ grafana=true
13:53:03 +++ shift
13:53:03 +++ [[ 0 -gt 0 ]]
13:53:03 +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose
13:53:03 +++ echo 'Configuring docker compose...'
13:53:03 Configuring docker compose...
13:53:03 +++ source export-ports.sh
13:53:03 +++ source get-versions.sh
13:53:05 +++ '[' -z pap ']'
13:53:05 +++ '[' -n apex-pdp ']'
13:53:05 +++ '[' apex-pdp == logs ']'
13:53:05 +++ '[' true = true ']'
13:53:05 +++ echo 'Starting apex-pdp application with Grafana'
13:53:05 Starting apex-pdp application with Grafana
13:53:05 +++ docker-compose up -d apex-pdp grafana
13:53:06 Creating network "compose_default" with the default driver
13:53:07 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
13:53:07 latest: Pulling from prom/prometheus
13:53:10 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
13:53:10 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
13:53:10 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
13:53:10 latest: Pulling from grafana/grafana
13:53:16 Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2
13:53:16 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
13:53:16 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
13:53:16 10.10.2: Pulling from mariadb
13:53:22 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
13:53:22 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
13:53:22 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
13:53:22 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
13:53:26 Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
13:53:26 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
13:53:26 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
13:53:27 latest: Pulling from confluentinc/cp-zookeeper
13:54:17 Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
13:54:18 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
13:54:18 Pulling kafka (confluentinc/cp-kafka:latest)...
13:54:22 latest: Pulling from confluentinc/cp-kafka
13:54:25 Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
13:54:25 Status: Downloaded newer image for confluentinc/cp-kafka:latest
13:54:25 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
13:54:26 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
13:54:35 Digest: sha256:eb47623eeab9aad8524ecc877b6708ae74b57f9f3cfe77554ad0d1521491cb5d
13:54:35 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
13:54:35 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
13:54:35 3.1.1-SNAPSHOT: Pulling from onap/policy-api
13:54:39 Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
13:54:39 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
13:54:39 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)...
13:54:39 3.1.1-SNAPSHOT: Pulling from onap/policy-pap
13:54:41 Digest: sha256:8a0432281bb5edb6d25e3d0e62d78b6aebc2875f52ecd11259251b497208c04e
13:54:41 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT
13:54:41 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
13:54:41 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
13:54:47 Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
13:54:47 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
13:54:47 Creating compose_zookeeper_1 ...
13:54:47 Creating prometheus ...
13:54:47 Creating mariadb ...
13:54:47 Creating simulator ...
13:55:04 Creating prometheus ... done
13:55:04 Creating grafana ...
13:55:05 Creating grafana ... done
13:55:06 Creating simulator ... done
13:55:08 Creating mariadb ... done
13:55:08 Creating policy-db-migrator ...
13:55:09 Creating compose_zookeeper_1 ... done
13:55:09 Creating kafka ...
13:55:10 Creating policy-db-migrator ... done
13:55:10 Creating policy-api ...
13:55:11 Creating policy-api ... done
13:55:12 Creating kafka ... done
13:55:12 Creating policy-pap ...
13:55:13 Creating policy-pap ... done
13:55:13 Creating policy-apex-pdp ...
13:55:14 Creating policy-apex-pdp ... done
13:55:14 +++ echo 'Prometheus server: http://localhost:30259'
13:55:14 Prometheus server: http://localhost:30259
13:55:14 +++ echo 'Grafana server: http://localhost:30269'
13:55:14 Grafana server: http://localhost:30269
13:55:14 +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
13:55:14 ++ sleep 10
13:55:24 ++ unset http_proxy https_proxy
13:55:24 ++ bash /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
13:55:24 Waiting for REST to come up on localhost port 30003...
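
wait_for_rest.sh polls until the PAP REST port accepts connections; its exact contents are not part of this log, so a minimal equivalent might look like:

    # Poll host:port until a TCP connect succeeds, or give up after 60 tries
    host=localhost port=30003
    for i in $(seq 1 60); do
        nc -z "$host" "$port" 2>/dev/null && { echo "REST is up on $host:$port"; break; }
        echo "Waiting for REST to come up on $host port $port..."
        sleep 5
    done
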
13:55:24 NAMES STATUS
13:55:24 policy-apex-pdp Up 10 seconds
13:55:24 policy-pap Up 11 seconds
13:55:24 policy-api Up 13 seconds
13:55:24 kafka Up 12 seconds
13:55:24 grafana Up 18 seconds
13:55:24 mariadb Up 16 seconds
13:55:24 prometheus Up 19 seconds
13:55:24 simulator Up 17 seconds
13:55:24 compose_zookeeper_1 Up 15 seconds
13:55:29 NAMES STATUS
13:55:29 policy-apex-pdp Up 15 seconds
13:55:29 policy-pap Up 16 seconds
13:55:29 policy-api Up 18 seconds
13:55:29 kafka Up 17 seconds
13:55:29 grafana Up 23 seconds
13:55:29 mariadb Up 21 seconds
13:55:29 prometheus Up 24 seconds
13:55:29 simulator Up 22 seconds
13:55:29 compose_zookeeper_1 Up 20 seconds
13:55:34 NAMES STATUS
13:55:34 policy-apex-pdp Up 20 seconds
13:55:34 policy-pap Up 21 seconds
13:55:34 policy-api Up 23 seconds
13:55:34 kafka Up 22 seconds
13:55:34 grafana Up 28 seconds
13:55:34 mariadb Up 26 seconds
13:55:34 prometheus Up 29 seconds
13:55:34 simulator Up 27 seconds
13:55:34 compose_zookeeper_1 Up 25 seconds
13:55:39 NAMES STATUS
13:55:39 policy-apex-pdp Up 25 seconds
13:55:39 policy-pap Up 26 seconds
13:55:39 policy-api Up 28 seconds
13:55:39 kafka Up 27 seconds
13:55:39 grafana Up 33 seconds
13:55:39 mariadb Up 31 seconds
13:55:39 prometheus Up 34 seconds
13:55:39 simulator Up 32 seconds
13:55:39 compose_zookeeper_1 Up 30 seconds
13:55:44 NAMES STATUS
13:55:44 policy-apex-pdp Up 30 seconds
13:55:44 policy-pap Up 31 seconds
13:55:44 policy-api Up 33 seconds
13:55:44 kafka Up 32 seconds
13:55:44 grafana Up 38 seconds
13:55:44 mariadb Up 36 seconds
13:55:44 prometheus Up 39 seconds
13:55:44 simulator Up 37 seconds
13:55:44 compose_zookeeper_1 Up 35 seconds
13:55:49 NAMES STATUS
13:55:49 policy-apex-pdp Up 35 seconds
13:55:49 policy-pap Up 36 seconds
13:55:49 policy-api Up 38 seconds
13:55:49 kafka Up 37 seconds
13:55:49 grafana Up 44 seconds
13:55:49 mariadb Up 41 seconds
13:55:49 prometheus Up 44 seconds
13:55:49 simulator Up 42 seconds
13:55:49 compose_zookeeper_1 Up 40 seconds
13:55:49 ++ export 'SUITES=pap-test.robot
13:55:49 pap-slas.robot'
13:55:49 ++ SUITES='pap-test.robot
13:55:49 pap-slas.robot'
13:55:49 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
13:55:49 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
13:55:49 + load_set
13:55:49 + _setopts=hxB
13:55:49 ++ echo braceexpand:hashall:interactive-comments:xtrace
13:55:49 ++ tr : ' '
13:55:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:55:49 + set +o braceexpand
13:55:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:55:49 + set +o hashall
13:55:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:55:49 + set +o interactive-comments
13:55:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:55:49 + set +o xtrace
13:55:49 ++ echo hxB
13:55:49 ++ sed 's/./& /g'
13:55:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:55:49 + set +h
13:55:49 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:55:49 + set +x
13:55:49 + docker_stats
13:55:49 + tee /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
13:55:49 ++ uname -s
13:55:49 + '[' Linux == Darwin ']'
13:55:49 + sh -c 'top -bn1 | head -3'
13:55:50 top - 13:55:49 up 5 min, 0 users, load average: 3.33, 1.78, 0.77
13:55:50 Tasks: 200 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
13:55:50 %Cpu(s): 10.7 us, 2.2 sy, 0.0 ni, 79.5 id, 7.5 wa, 0.0 hi, 0.1 si, 0.1 st
13:55:50 + echo
13:55:50
13:55:50 + sh -c 'free -h'
13:55:50 total used free shared buff/cache available
13:55:50 Mem: 31G 2.6G 22G 1.3M 6.5G 28G
13:55:50 Swap: 1.0G 0B 1.0G
13:55:50 + echo
13:55:50
13:55:50 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
13:55:50 NAMES STATUS
13:55:50 policy-apex-pdp Up 35 seconds
13:55:50 policy-pap Up 36 seconds
13:55:50 policy-api Up 39 seconds
13:55:50 kafka Up 38 seconds
13:55:50 grafana Up 44 seconds
13:55:50 mariadb Up 42 seconds
13:55:50 prometheus Up 45 seconds
13:55:50 simulator Up 43 seconds
13:55:50 compose_zookeeper_1 Up 41 seconds
13:55:50 + echo
13:55:50
13:55:50 + docker stats --no-stream
13:55:52 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
13:55:52 9c961486a35c policy-apex-pdp 1.91% 187.7MiB / 31.41GiB 0.58% 8.73kB / 8.27kB 0B / 0B 48
13:55:52 acbeea9b5acd policy-pap 3.26% 523.2MiB / 31.41GiB 1.63% 30.8kB / 34kB 0B / 180MB 61
13:55:52 0dae240d22d9 policy-api 0.47% 436.6MiB / 31.41GiB 1.36% 1MB / 737kB 0B / 0B 52
13:55:52 1b1ecfacb928 kafka 0.84% 381MiB / 31.41GiB 1.18% 74.6kB / 76.9kB 0B / 508kB 81
13:55:52 c25b2e6d31ff grafana 0.02% 53.89MiB / 31.41GiB 0.17% 19.6kB / 3.57kB 0B / 23.9MB 16
13:55:52 aea2f77a8936 mariadb 0.02% 102MiB / 31.41GiB 0.32% 996kB / 1.19MB 11MB / 48.5MB 38
13:55:52 16914ab7f65c prometheus 0.01% 18.48MiB / 31.41GiB 0.06% 28.7kB / 1.09kB 4.1kB / 0B 13
13:55:52 08c608326275 simulator 0.17% 122MiB / 31.41GiB 0.38% 1.31kB / 0B 0B / 0B 76
13:55:52 328e2f248c5c compose_zookeeper_1 0.11% 96.82MiB / 31.41GiB 0.30% 56.5kB / 49.9kB 98.3kB / 393kB 60
13:55:52 + echo
13:55:52
13:55:52 + cd /tmp/tmp.vXEmLvt3D4
13:55:52 + echo 'Reading the testplan:'
13:55:52 Reading the testplan:
13:55:52 + echo 'pap-test.robot
13:55:52 pap-slas.robot'
13:55:52 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
13:55:52 + sed 's|^|/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/|'
13:55:52 + cat testplan.txt
13:55:52 /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot
13:55:52 /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
13:55:52 ++ xargs
13:55:52 + SUITES='/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot'
13:55:52 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
13:55:52 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
13:55:52 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
13:55:52 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
13:55:52 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...'
13:55:52 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...
13:55:52 + relax_set
13:55:52 + set +e
13:55:52 + set +o pipefail
13:55:52 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
13:55:53 ==============================================================================
13:55:53 pap
13:55:53 ==============================================================================
13:55:53 pap.Pap-Test
13:55:53 ==============================================================================
13:55:53 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
13:55:53 ------------------------------------------------------------------------------
13:55:54 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
13:55:54 ------------------------------------------------------------------------------
13:55:54 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
13:55:54 ------------------------------------------------------------------------------
13:55:55 Healthcheck :: Verify policy pap health check | PASS |
13:55:55 ------------------------------------------------------------------------------
13:56:15 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
13:56:15 ------------------------------------------------------------------------------
13:56:16 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
13:56:16 ------------------------------------------------------------------------------
13:56:16 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
13:56:16 ------------------------------------------------------------------------------
13:56:16 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
13:56:16 ------------------------------------------------------------------------------
13:56:17 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
13:56:17 ------------------------------------------------------------------------------
13:56:17 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
13:56:17 ------------------------------------------------------------------------------
13:56:17 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
13:56:17 ------------------------------------------------------------------------------
13:56:17 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
13:56:17 ------------------------------------------------------------------------------
13:56:17 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
13:56:17 ------------------------------------------------------------------------------
13:56:18 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
13:56:18 ------------------------------------------------------------------------------
13:56:18 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
13:56:18 ------------------------------------------------------------------------------
13:56:18 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
13:56:18 ------------------------------------------------------------------------------
13:56:18 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
13:56:18 ------------------------------------------------------------------------------
13:56:39 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
13:56:39 ------------------------------------------------------------------------------
13:56:39 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
13:56:39 ------------------------------------------------------------------------------
13:56:39 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
13:56:39 ------------------------------------------------------------------------------
13:56:39 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
13:56:39 ------------------------------------------------------------------------------
13:56:39 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
13:56:39 ------------------------------------------------------------------------------
13:56:39 pap.Pap-Test | PASS |
13:56:39 22 tests, 22 passed, 0 failed
13:56:39 ==============================================================================
13:56:39 pap.Pap-Slas
13:56:39 ==============================================================================
13:57:39 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
13:57:39 ------------------------------------------------------------------------------
13:57:39 pap.Pap-Slas | PASS |
13:57:39 8 tests, 8 passed, 0 failed
13:57:39 ==============================================================================
13:57:39 pap | PASS |
13:57:39 30 tests, 30 passed, 0 failed
13:57:39 ==============================================================================
13:57:39 Output: /tmp/tmp.vXEmLvt3D4/output.xml
13:57:40 Log: /tmp/tmp.vXEmLvt3D4/log.html
13:57:40 Report: /tmp/tmp.vXEmLvt3D4/report.html
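
Robot keeps the machine-readable results in output.xml; the HTML artifacts can be regenerated from it after the fact with rebot, for example (output path as reported above):

    # Rebuild log.html and report.html from the saved results
    python3 -m robot.rebot --log log.html --report report.html /tmp/tmp.vXEmLvt3D4/output.xml
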
13:57:40 + RESULT=0
13:57:40 + load_set
13:57:40 + _setopts=hxB
13:57:40 ++ echo braceexpand:hashall:interactive-comments:xtrace
13:57:40 ++ tr : ' '
13:57:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:57:40 + set +o braceexpand
13:57:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:57:40 + set +o hashall
13:57:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:57:40 + set +o interactive-comments
13:57:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:57:40 + set +o xtrace
13:57:40 ++ echo hxB
13:57:40 ++ sed 's/./& /g'
13:57:40 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:57:40 + set +h
13:57:40 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:57:40 + set +x
13:57:40 + echo 'RESULT: 0'
13:57:40 RESULT: 0
13:57:40 + exit 0
13:57:40 + on_exit
13:57:40 + rc=0
13:57:40 + [[ -n /w/workspace/policy-pap-master-project-csit-verify-pap ]]
13:57:40 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
13:57:40 NAMES STATUS
13:57:40 policy-apex-pdp Up 2 minutes
13:57:40 policy-pap Up 2 minutes
13:57:40 policy-api Up 2 minutes
13:57:40 kafka Up 2 minutes
13:57:40 grafana Up 2 minutes
13:57:40 mariadb Up 2 minutes
13:57:40 prometheus Up 2 minutes
13:57:40 simulator Up 2 minutes
13:57:40 compose_zookeeper_1 Up 2 minutes
13:57:40 + docker_stats
13:57:40 ++ uname -s
13:57:40 + '[' Linux == Darwin ']'
13:57:40 + sh -c 'top -bn1 | head -3'
13:57:40 top - 13:57:40 up 7 min, 0 users, load average: 0.97, 1.48, 0.78
13:57:40 Tasks: 198 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
13:57:40 %Cpu(s): 9.2 us, 1.8 sy, 0.0 ni, 82.7 id, 6.1 wa, 0.0 hi, 0.1 si, 0.1 st
13:57:40 + echo
13:57:40
13:57:40 + sh -c 'free -h'
13:57:40 total used free shared buff/cache available
13:57:40 Mem: 31G 2.7G 22G 1.3M 6.5G 28G
13:57:40 Swap: 1.0G 0B 1.0G
13:57:40 + echo
13:57:40
13:57:40 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
13:57:40 NAMES STATUS
13:57:40 policy-apex-pdp Up 2 minutes
13:57:40 policy-pap Up 2 minutes
13:57:40 policy-api Up 2 minutes
13:57:40 kafka Up 2 minutes
13:57:40 grafana Up 2 minutes
13:57:40 mariadb Up 2 minutes
13:57:40 prometheus Up 2 minutes
13:57:40 simulator Up 2 minutes
13:57:40 compose_zookeeper_1 Up 2 minutes
13:57:40 + echo
13:57:40
13:57:40 + docker stats --no-stream
13:57:42 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
13:57:42 9c961486a35c policy-apex-pdp 0.34% 185.7MiB / 31.41GiB 0.58% 57.1kB / 91.8kB 0B / 0B 50
13:57:42 acbeea9b5acd policy-pap 24.73% 486.1MiB / 31.41GiB 1.51% 2.34MB / 821kB 0B / 180MB 66
13:57:42 0dae240d22d9 policy-api 0.12% 472.7MiB / 31.41GiB 1.47% 2.49MB / 1.29MB 0B / 0B 53
13:57:42 1b1ecfacb928 kafka 1.52% 397.1MiB / 31.41GiB 1.23% 245kB / 219kB 0B / 606kB 83
13:57:42 c25b2e6d31ff grafana 0.01% 50.12MiB / 31.41GiB 0.16% 20.6kB / 4.61kB 0B / 23.9MB 16
13:57:42 aea2f77a8936 mariadb 0.01% 103.3MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 48.8MB 28
13:57:42 16914ab7f65c prometheus 0.20% 24.74MiB / 31.41GiB 0.08% 220kB / 11.7kB 4.1kB / 0B 13
13:57:42 08c608326275 simulator 0.09% 122MiB / 31.41GiB 0.38% 1.58kB / 0B 0B / 0B 76
13:57:42 328e2f248c5c compose_zookeeper_1 0.10% 96.84MiB / 31.41GiB 0.30% 59.4kB / 51.4kB 98.3kB / 393kB 60
13:57:42 + echo
13:57:42
13:57:42 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh
13:57:42 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh ']'
13:57:42 + relax_set
13:57:42 + set +e
13:57:42 + set +o pipefail
13:57:42 + . /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh
13:57:42 ++ echo 'Shut down started!'
13:57:42 Shut down started!
13:57:42 ++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
13:57:42 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose
13:57:42 ++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose
13:57:42 ++ source export-ports.sh
13:57:42 ++ source get-versions.sh
13:57:44 ++ echo 'Collecting logs from docker compose containers...'
13:57:44 Collecting logs from docker compose containers...
13:57:44 ++ docker-compose logs
13:57:46 ++ cat docker_compose.log
13:57:46 Attaching to policy-apex-pdp, policy-pap, policy-api, kafka, policy-db-migrator, grafana, mariadb, prometheus, simulator, compose_zookeeper_1
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.84993818Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850147657Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850155297Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850158877Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850161937Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850165137Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850169418Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850172778Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850177388Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850181948Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850185498Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850189978Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850195428Z level=info msg=Target target=[all]
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850207289Z level=info msg="Path Home" path=/usr/share/grafana
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850210919Z level=info msg="Path Data" path=/var/lib/grafana
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850213779Z level=info msg="Path Logs" path=/var/log/grafana
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850217059Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850221049Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
13:57:46 grafana | logger=settings t=2024-01-22T13:55:05.850225369Z level=info msg="App mode production"
13:57:46 grafana | logger=sqlstore t=2024-01-22T13:55:05.850524749Z level=info msg="Connecting to DB" dbtype=sqlite3
13:57:46 grafana | logger=sqlstore t=2024-01-22T13:55:05.85054342Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.851092348Z level=info msg="Starting DB migrations"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.851918856Z level=info msg="Executing migration" id="create migration_log table"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.85265958Z level=info msg="Migration successfully executed" id="create migration_log table" duration=740.394µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.875536161Z level=info msg="Executing migration" id="create user table"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.876617647Z level=info msg="Migration successfully executed" id="create user table" duration=1.081746ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.881059585Z level=info msg="Executing migration" id="add unique index user.login"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.882162032Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.106896ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.885394259Z level=info msg="Executing migration" id="add unique index user.email"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.886446184Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.051515ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.889419913Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.890060004Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=640.041µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.894475521Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.895061291Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=585.719µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.89803964Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.902400755Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.361766ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.905636362Z level=info msg="Executing migration" id="create user table v2"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.906321885Z level=info msg="Migration successfully executed" id="create user table v2" duration=688.533µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.910518035Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.911184427Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=665.992µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.91397964Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.914912461Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=931.401µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.917916851Z level=info msg="Executing migration" id="copy data_source v1 to v2"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.918528131Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=609.6µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.923506817Z level=info msg="Executing migration" id="Drop old table user_v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.923994693Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=492.277µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.927316713Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.92841009Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.092677ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.930941644Z level=info msg="Executing migration" id="Update user table charset"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.930965355Z level=info msg="Migration successfully executed" id="Update user table charset" duration=24.111µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.934017456Z level=info msg="Executing migration" id="Add last_seen_at column to user"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.935658521Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.640405ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.940501932Z level=info msg="Executing migration" id="Add missing user data"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.940689508Z level=info msg="Migration successfully executed" id="Add missing user data" duration=187.556µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.943437779Z level=info msg="Executing migration" id="Add is_disabled column to user"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.944529636Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.085807ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.947277497Z level=info msg="Executing migration" id="Add index user.login/user.email"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.947948049Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=670.242µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.951254869Z level=info msg="Executing migration" id="Add is_service_account column to user"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.953164753Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.909544ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.958264983Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.967660485Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.395002ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.970338444Z level=info msg="Executing migration" id="create temp user table v1-7"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.971016597Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=680.383µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.973912983Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.974631447Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=714.644µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.97892651Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.979596762Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=670.042µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.982330383Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.983004115Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=671.832µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.986099528Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.987306558Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.20736ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.992044446Z level=info msg="Executing migration" id="Update temp_user table charset"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.992082097Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=38.511µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.995505061Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:05.996559036Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.052085ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.000038742Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.00118146Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.147578ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.005892784Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.006536211Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=643.417µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.008613584Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.009248211Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=635.077µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.011915152Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.016375429Z level=info msg="Migration
successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.457807ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.020928839Z level=info msg="Executing migration" id="create temp_user v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.021683688Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=754.509µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.024452891Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.02516037Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=706.559µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.028034615Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.028742314Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=707.409µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.031414054Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.032085701Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=671.247µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.036592599Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.03775491Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.163951ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.040782039Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.041380725Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=598.346µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.044290761Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.045071132Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=779.891µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.049661272Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.050000171Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=338.809µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.051975893Z level=info msg="Executing migration" id="create star table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.052558438Z level=info msg="Migration successfully executed" id="create star table" duration=582.665µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.055838174Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.057086307Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.247173ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.061815611Z level=info msg="Executing migration" id="create org table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.062875279Z level=info msg="Migration successfully 
executed" id="create org table v1" duration=1.059178ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.066927275Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.067616903Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=689.418µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.070297903Z level=info msg="Executing migration" id="create org_user table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.070892739Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=589.936µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.073165439Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.073862187Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=696.608µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.077824111Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.078615111Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=790.59µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.081508687Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.082638567Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.1293ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.085742578Z level=info msg="Executing migration" id="Update org table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.085782409Z level=info msg="Migration successfully executed" id="Update org table charset" duration=41.211µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.088066439Z level=info msg="Executing migration" id="Update org_user table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.08810058Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=35.261µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.092202518Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.092392493Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=185.525µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.095393442Z level=info msg="Executing migration" id="create dashboard table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.096517311Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.12337ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.100165557Z level=info msg="Executing migration" id="add index dashboard.account_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.101734868Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.568821ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.105179688Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.106463452Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" 
duration=1.279324ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.11133605Z level=info msg="Executing migration" id="create dashboard_tag table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.112048968Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=712.248µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.115348315Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.116264799Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=905.483µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.119405191Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.120553951Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.14839ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.125364008Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.136037987Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=10.67076ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.139504968Z level=info msg="Executing migration" id="create dashboard v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.140012492Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=510.944µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.197944131Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.199347798Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.403757ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.204332188Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.205454538Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.12363ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.208465817Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.208784925Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=319.158µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.212337038Z level=info msg="Executing migration" id="drop table dashboard_v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.213349375Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.015537ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.278176575Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.27837992Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=204.315µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.28180191Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.284657195Z level=info msg="Migration successfully executed" id="Add column updated_by in 
dashboard - v2" duration=2.859615ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.288023363Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.289753649Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.730016ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.294024121Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.296700391Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.6748ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.300706006Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.301904207Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.197581ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.305556933Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.308899171Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.342198ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.313197253Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.313987234Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=789.711µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.317075585Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.317839875Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=764.39µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.320635719Z level=info msg="Executing migration" id="Update dashboard table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.320662429Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=27.51µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.324320075Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.324345146Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=27.211µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.327298623Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.330367234Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.067761ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.335420296Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.338705512Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.289186ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.34280451Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.344739441Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" 
duration=1.934331ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.347500843Z level=info msg="Executing migration" id="Add column uid in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.349475545Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.015323ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.352619407Z level=info msg="Executing migration" id="Update uid column values in dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.352993857Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=393.58µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.355988376Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.357347341Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.358825ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.361253424Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.362373583Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.119599ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.365579097Z level=info msg="Executing migration" id="Update dashboard title length" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.365608288Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=29.951µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.368923755Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.369702495Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=778.26µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.373452044Z level=info msg="Executing migration" id="create dashboard_provisioning" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.374497051Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.049408ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.377778887Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.386049104Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=8.270877ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.389095284Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.389727261Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=635.496µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.394193968Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.394933387Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=735.629µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.39809139Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 13:57:46 
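The grafana service above is replaying its schema migrations into the fresh SQLite file at /var/lib/grafana/grafana.db; every "Executing migration" / "Migration successfully executed" pair is also recorded in the migration_log table created at the start of the run. A quick spot-check from the CSIT host, assuming the compose service is named grafana and the image ships the sqlite3 binary (neither is guaranteed by this log):

    # Hypothetical spot-check of the migration ledger inside the grafana container
    docker-compose exec grafana sqlite3 /var/lib/grafana/grafana.db \
      "SELECT migration_id, success, timestamp FROM migration_log ORDER BY timestamp DESC LIMIT 5;"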
grafana | logger=migrator t=2024-01-22T13:55:06.399503567Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.411887ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.402715021Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.403005479Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=290.518µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.406384077Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.406928262Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=539.734µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.410724321Z level=info msg="Executing migration" id="Add check_sum column" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.4129721Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.238469ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.415674421Z level=info msg="Executing migration" id="Add index for dashboard_title" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.417638973Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.950471ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.420707043Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.420899758Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=192.805µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.423474576Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.42363871Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=163.494µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.425799907Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.426647469Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=847.643µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.429432622Z level=info msg="Executing migration" id="Add isPublic for dashboard" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.43165451Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.220838ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.434438043Z level=info msg="Executing migration" id="create data_source table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.435340467Z level=info msg="Migration successfully executed" id="create data_source table" duration=901.954µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.43813424Z level=info msg="Executing migration" id="add index data_source.account_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.439416044Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.286804ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.442527245Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 13:57:46 grafana | logger=migrator 
t=2024-01-22T13:55:06.443591123Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.064058ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.446437918Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.44804771Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.609532ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.451596643Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.452292781Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=695.948µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.455079344Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.462015086Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.923812ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.465049376Z level=info msg="Executing migration" id="create data_source table v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.465726764Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=676.228µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.468596229Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.469245726Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=646.517µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.472045669Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.473375364Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.326895ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.476444305Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.477454541Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.004176ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.480306776Z level=info msg="Executing migration" id="Add column with_credentials" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.482818112Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.510386ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.485672177Z level=info msg="Executing migration" id="Add secure json data column" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.488018518Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.345801ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.491054458Z level=info msg="Executing migration" id="Update data_source table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.491090769Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=37.121µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.494080797Z level=info msg="Executing migration" id="Update initial version to 
1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.494376735Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=298.968µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.496733427Z level=info msg="Executing migration" id="Add read_only data column" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.499435828Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.701491ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.502462077Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.502660272Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=198.645µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.504654555Z level=info msg="Executing migration" id="Update json_data with nulls" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.50485086Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=193.966µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.50754646Z level=info msg="Executing migration" id="Add uid column" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.510013315Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.462005ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.51286397Z level=info msg="Executing migration" id="Update uid value" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.513051815Z level=info msg="Migration successfully executed" id="Update uid value" duration=187.595µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.516514026Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 13:57:46 zookeeper_1 | ===> User 13:57:46 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 13:57:46 zookeeper_1 | ===> Configuring ... 13:57:46 zookeeper_1 | ===> Running preflight checks ... 13:57:46 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 13:57:46 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 13:57:46 zookeeper_1 | ===> Launching ... 13:57:46 zookeeper_1 | ===> Launching zookeeper ... 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,791] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,798] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,798] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,798] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,798] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,799] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,799] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,799] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,799] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,801] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,801] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,801] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,801] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,801] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,801] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,801] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,812] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@55b53d44 (org.apache.zookeeper.server.ServerMetrics) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,815] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,815] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,817] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,827] INFO [ZooKeeper ASCII-art startup banner] (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:host.name=328e2f248c5c (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46
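The standalone-mode WARN and the settings echoed above come from ZooKeeper reading /etc/kafka/zookeeper.properties twice (once in QuorumPeerMain, once in ZooKeeperServerMain) and finding no quorum servers defined. A minimal properties file consistent with the logged values (client port 2181, snapshots under /var/lib/zookeeper/data, transaction logs under /var/lib/zookeeper/log, purge task disabled) would look like the sketch below; it is reconstructed from the log, not dumped from the image:

    # Reconstructed zookeeper.properties (hypothetical; values taken from the log lines above)
    cat > zookeeper.properties <<'EOF'
    dataDir=/var/lib/zookeeper/data
    dataLogDir=/var/lib/zookeeper/log
    clientPort=2181
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=0
    EOF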
zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.518610011Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=2.093305ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.521834505Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.52316742Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.332005ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.52621622Z level=info msg="Executing migration" id="create api_key table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.52699324Z level=info msg="Migration successfully executed" id="create api_key table" duration=776.97µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.530057761Z level=info msg="Executing migration" id="add index api_key.account_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.530830981Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=772.56µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.53421089Z level=info msg="Executing migration" id="add index api_key.key" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.535869613Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.660493ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.539446977Z level=info msg="Executing migration" id="add index api_key.account_id_name" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.540977067Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.53305ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.546298487Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.548461863Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=2.719781ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.551671048Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.552686464Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.012026ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.555691533Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.55709893Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.406967ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.560057557Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.566549568Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.489131ms 13:57:46 grafana | logger=migrator 
t=2024-01-22T13:55:06.569145866Z level=info msg="Executing migration" id="create api_key table v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.569631979Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=483.862µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.57234545Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.572923305Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=575.185µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.575648486Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.57618499Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=534.894µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.579295072Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.580560705Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.268363ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.583835171Z level=info msg="Executing migration" id="copy api_key v1 to v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.584305403Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=471.432µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.587005744Z level=info msg="Executing migration" id="Drop old table api_key_v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.587548498Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=542.554µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.590303401Z level=info msg="Executing migration" id="Update api_key table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.590329041Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.53µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.592347804Z level=info msg="Executing migration" id="Add expires to api_key table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.596574895Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.225661ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.599932273Z level=info msg="Executing migration" id="Add service account foreign key" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.602648714Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.711201ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.617753811Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.617965416Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=214.216µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.619917607Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.622574837Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.65653ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.62537489Z level=info msg="Executing migration" id="Add is_revoked 
column to api_key table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.627927857Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.551577ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.631013368Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.631722927Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=716.999µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.634746756Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.635309121Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=563.595µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.638141365Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.63907051Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=929.605µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.641945065Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.642843089Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=898.334µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.645692173Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v2
0231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/.
./share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,830] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.intBufferStartingSizeBytes = 1024 
(org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,831] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,832] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,832] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,833] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,833] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,836] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,837] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,837] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.64670202Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.009857ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.649608256Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.650717755Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.109039ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.653748515Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.653832937Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=85.873µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.656057735Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.656103556Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=47.661µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.659289Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 13:57:46 grafana | logger=migrator 
t=2024-01-22T13:55:06.662055632Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.767012ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.66500125Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.667057434Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.057144ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.669741844Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.669790925Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=49.241µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.671879Z level=info msg="Executing migration" id="create quota table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.672584649Z level=info msg="Migration successfully executed" id="create quota table v1" duration=706.359µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.675384632Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.676241904Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=855.572µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.679218153Z level=info msg="Executing migration" id="Update quota table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.679288994Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=72.222µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.682320744Z level=info msg="Executing migration" id="create plugin_setting table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.683470604Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.14907ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.68673912Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.687637913Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=897.413µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.690912839Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.694433792Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.518502ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.697952744Z level=info msg="Executing migration" id="Update plugin_setting table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.697997395Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=43.771µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.700685786Z level=info msg="Executing migration" id="create session table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.701654471Z level=info msg="Migration successfully executed" id="create session table" duration=968.376µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.704845235Z level=info msg="Executing migration" id="Drop old table playlist table" 13:57:46 
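Most of the grafana migrations in this stream follow the same table-rebuild recipe: rename the live table aside as *_v1 (or *_tmp_qwerty), create the v2 schema, copy the rows across, then drop the old table. The same sequence in miniature, as a toy sqlite3 session rather than Grafana's actual DDL:

    # Toy illustration of the rename/create-v2/copy/drop pattern the migrator logs (not Grafana's real schema)
    sqlite3 /tmp/demo.db <<'EOF'
    CREATE TABLE t (id INTEGER);
    ALTER TABLE t RENAME TO t_tmp_qwerty;              -- "Rename table ... - v1"
    CREATE TABLE t (id INTEGER, created_at INTEGER);   -- "create ... table v2"
    INSERT INTO t (id) SELECT id FROM t_tmp_qwerty;    -- "copy ... v1 to v2"
    DROP TABLE t_tmp_qwerty;                           -- "drop ..._tmp_qwerty"
    EOF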
grafana | logger=migrator t=2024-01-22T13:55:06.704956698Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=113.373µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.707648778Z level=info msg="Executing migration" id="Drop old table playlist_item table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.70773347Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=83.442µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.710277737Z level=info msg="Executing migration" id="create playlist table v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.711008946Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=728.939µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.713637695Z level=info msg="Executing migration" id="create playlist item table v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.714552259Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=914.104µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.717284811Z level=info msg="Executing migration" id="Update playlist table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.717326992Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=43.311µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.720240398Z level=info msg="Executing migration" id="Update playlist_item table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.720335401Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=96.043µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.723042372Z level=info msg="Executing migration" id="Add playlist column created_at" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.726236166Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.193004ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.73058172Z level=info msg="Executing migration" id="Add playlist column updated_at" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.734602605Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.017546ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.739819922Z level=info msg="Executing migration" id="drop preferences table v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.740208242Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=394.77µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.744039212Z level=info msg="Executing migration" id="drop preferences table v3" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.744359451Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=320.059µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.750545713Z level=info msg="Executing migration" id="create preferences table v3" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.751598771Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.052038ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.760494904Z level=info msg="Executing migration" id="Update preferences table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.760577476Z level=info msg="Migration successfully executed" id="Update preferences table 
charset" duration=86.572µs 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,837] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,837] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,858] INFO Logging initialized @551ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,951] WARN o.e.j.s.ServletContextHandler@49c90a9c{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,951] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 13:57:46 zookeeper_1 | [2024-01-22 13:55:12,969] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,001] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,001] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,003] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,006] WARN ServletContext@o.e.j.s.ServletContextHandler@49c90a9c{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,016] INFO Started o.e.j.s.ServletContextHandler@49c90a9c{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,033] INFO Started ServerConnector@723ca036{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,033] INFO Started @727ms (org.eclipse.jetty.server.Server) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,033] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,038] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,039] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,040] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,041] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,064] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,065] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,066] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,066] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,071] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,071] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,074] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,075] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,075] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,088] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,087] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,103] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 13:57:46 zookeeper_1 | [2024-01-22 13:55:13,104] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 13:57:46 zookeeper_1 | [2024-01-22 13:55:16,129] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.763651247Z level=info msg="Executing migration" id="Add column team_id in preferences" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.766968154Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.316047ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.769897791Z level=info msg="Executing migration" id="Update team_id column values in preferences" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.770146447Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=247.886µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.774304026Z level=info msg="Executing migration" id="Add column week_start in preferences" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.777740666Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.43355ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.780903599Z level=info msg="Executing migration" id="Add column preferences.json_data" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.784062682Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.157853ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.787488782Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.787649016Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=161.364µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.791635201Z level=info msg="Executing migration" id="Add preferences index org_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.792677798Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.042267ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.795925853Z level=info msg="Executing migration" id="Add preferences index user_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.796834607Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=908.394µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.799956409Z level=info msg="Executing migration" id="create alert table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.800971506Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.013046ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.805179366Z level=info msg="Executing migration" id="add index alert org_id & id " 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.806209553Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.029147ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.810059354Z level=info msg="Executing migration" id="add index alert state" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.810949737Z level=info msg="Migration successfully executed" id="add index alert state" duration=888.213µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.815073115Z level=info msg="Executing migration" id="add index alert dashboard_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.815949568Z 
level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=876.733µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.819158733Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.81983215Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=672.628µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.822957032Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.823915297Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=958.445µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.826708181Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.827599154Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=890.134µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.832275737Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.844821956Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.546149ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.849303173Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.849823467Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=519.794µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.853857593Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.85489047Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.032268ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.858242898Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.858593077Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=349.63µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.862203461Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.862760506Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=556.875µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.865774915Z level=info msg="Executing migration" id="create alert_notification table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.866413252Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=638.357µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.870243742Z level=info msg="Executing migration" id="Add column is_default" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.873942509Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.698117ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.879436713Z level=info 
msg="Executing migration" id="Add column frequency" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.88313542Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.697917ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.888732097Z level=info msg="Executing migration" id="Add column send_reminder" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.892890706Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.160969ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.897310162Z level=info msg="Executing migration" id="Add column disable_resolve_message" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.900942527Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.632205ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.903688919Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.90485832Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.171371ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.907866989Z level=info msg="Executing migration" id="Update alert table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.907996212Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=129.123µs 13:57:46 kafka | ===> User 13:57:46 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 13:57:46 kafka | ===> Configuring ... 13:57:46 kafka | Running in Zookeeper mode... 13:57:46 kafka | ===> Running preflight checks ... 13:57:46 kafka | ===> Check if /var/lib/kafka/data is writable ... 13:57:46 kafka | ===> Check if Zookeeper is healthy ... 13:57:46 kafka | [2024-01-22 13:55:16,075] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:host.name=1b1ecfacb928 (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:os.name=Linux 
(org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,076] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,077] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,077] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,079] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:16,082] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 13:57:46 kafka | [2024-01-22 13:55:16,086] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 13:57:46 kafka | [2024-01-22 13:55:16,092] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:16,107] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:16,107] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:16,116] INFO Socket connection established, initiating session, client: /172.17.0.8:55306, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:16,151] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000004db890000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:16,277] INFO EventThread shut down for session: 0x1000004db890000 (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:16,277] INFO Session: 0x1000004db890000 closed (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | Using log4j config /etc/kafka/log4j.properties 13:57:46 kafka | ===> Launching ... 13:57:46 kafka | ===> Launching kafka ... 
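[The kafka container's preflight above ("===> Check if Zookeeper is healthy ...") opens a ZooKeeper client session, waits for establishment, and closes it again before launching the broker. Below is a rough standalone sketch of an equivalent readiness probe — not the Confluent tooling the image actually runs — using the 'ruok' four-letter command. It assumes 'ruok' is whitelisted on the server (4lw.commands.whitelist=ruok), which this log does not show.

#!/bin/bash
# Hedged sketch of a ZooKeeper readiness probe, not the image's real check.
ZK_HOST="${ZK_HOST:-zookeeper}"
ZK_PORT="${ZK_PORT:-2181}"
for attempt in $(seq 1 30); do
  # A healthy server answers the 'ruok' four-letter command with 'imok'.
  if [ "$(echo ruok | nc -w 2 "$ZK_HOST" "$ZK_PORT" 2>/dev/null)" = "imok" ]; then
    echo "Zookeeper is healthy"
    exit 0
  fi
  echo "attempt $attempt: Zookeeper not ready, retrying..."
  sleep 2
done
echo "Zookeeper did not become healthy in time" >&2
exit 1
]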
13:57:46 kafka | [2024-01-22 13:55:16,977] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 13:57:46 kafka | [2024-01-22 13:55:17,298] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 13:57:46 kafka | [2024-01-22 13:55:17,370] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 13:57:46 kafka | [2024-01-22 13:55:17,371] INFO starting (kafka.server.KafkaServer) 13:57:46 kafka | [2024-01-22 13:55:17,371] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 13:57:46 kafka | [2024-01-22 13:55:17,385] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:host.name=1b1ecfacb928 (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../s
hare/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zoo
keeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,392] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@32193bea (org.apache.zookeeper.ZooKeeper) 13:57:46 kafka | [2024-01-22 13:55:17,396] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 13:57:46 kafka | [2024-01-22 13:55:17,401] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:17,402] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 13:57:46 kafka | [2024-01-22 13:55:17,407] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:17,413] INFO Socket connection established, initiating session, client: /172.17.0.8:55308, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:17,425] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000004db890001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 13:57:46 kafka | [2024-01-22 13:55:17,432] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 13:57:46 kafka | [2024-01-22 13:55:17,782] INFO Cluster ID = YXDHh3LaSIyP8FezJr0IvQ (kafka.server.KafkaServer) 13:57:46 kafka | [2024-01-22 13:55:17,786] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 13:57:46 kafka | [2024-01-22 13:55:17,837] INFO KafkaConfig values: 13:57:46 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 13:57:46 kafka | alter.config.policy.class.name = null 13:57:46 kafka | alter.log.dirs.replication.quota.window.num = 11 13:57:46 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 13:57:46 kafka | authorizer.class.name = 13:57:46 kafka | auto.create.topics.enable = true 13:57:46 kafka | auto.include.jmx.reporter = true 13:57:46 kafka | auto.leader.rebalance.enable = true 13:57:46 kafka | background.threads = 10 13:57:46 kafka | broker.heartbeat.interval.ms = 2000 13:57:46 kafka | broker.id = 1 13:57:46 kafka | broker.id.generation.enable = true 13:57:46 kafka | broker.rack = null 13:57:46 kafka | broker.session.timeout.ms = 9000 13:57:46 kafka | client.quota.callback.class = null 13:57:46 kafka | compression.type = producer 13:57:46 kafka | connection.failed.authentication.delay.ms = 100 13:57:46 kafka | connections.max.idle.ms = 600000 13:57:46 kafka | connections.max.reauth.ms = 0 13:57:46 kafka | control.plane.listener.name = null 13:57:46 kafka | controlled.shutdown.enable = true 13:57:46 kafka | controlled.shutdown.max.retries = 3 13:57:46 kafka | controlled.shutdown.retry.backoff.ms = 5000 13:57:46 kafka | controller.listener.names = null 13:57:46 kafka | controller.quorum.append.linger.ms = 25 13:57:46 kafka | controller.quorum.election.backoff.max.ms = 1000 13:57:46 kafka | controller.quorum.election.timeout.ms = 1000 13:57:46 kafka | controller.quorum.fetch.timeout.ms = 2000 13:57:46 kafka | controller.quorum.request.timeout.ms = 2000 13:57:46 kafka | controller.quorum.retry.backoff.ms = 20 13:57:46 kafka | controller.quorum.voters = [] 13:57:46 kafka | controller.quota.window.num = 11 13:57:46 kafka | controller.quota.window.size.seconds = 1 13:57:46 kafka | controller.socket.timeout.ms = 30000 13:57:46 kafka | create.topic.policy.class.name = null 13:57:46 kafka | default.replication.factor = 1 13:57:46 kafka | delegation.token.expiry.check.interval.ms = 3600000 13:57:46 kafka | delegation.token.expiry.time.ms = 86400000 13:57:46 kafka | delegation.token.master.key = null 13:57:46 kafka | delegation.token.max.lifetime.ms = 604800000 13:57:46 kafka | delegation.token.secret.key = null 13:57:46 kafka | delete.records.purgatory.purge.interval.requests = 1 13:57:46 kafka | delete.topic.enable = true 13:57:46 kafka | early.start.listeners = null 13:57:46 kafka | fetch.max.bytes = 57671680 13:57:46 kafka | fetch.purgatory.purge.interval.requests = 1000 13:57:46 kafka | group.consumer.assignors = [] 13:57:46 kafka | group.consumer.heartbeat.interval.ms = 5000 13:57:46 kafka | group.consumer.max.heartbeat.interval.ms = 15000 13:57:46 kafka | group.consumer.max.session.timeout.ms = 60000 13:57:46 kafka | group.consumer.max.size = 2147483647 13:57:46 kafka | group.consumer.min.heartbeat.interval.ms = 5000 13:57:46 kafka | group.consumer.min.session.timeout.ms = 45000 13:57:46 kafka | group.consumer.session.timeout.ms = 45000 13:57:46 kafka | group.coordinator.new.enable = false 13:57:46 kafka | group.coordinator.threads = 1 13:57:46 kafka | group.initial.rebalance.delay.ms = 3000 13:57:46 kafka | 
group.max.session.timeout.ms = 1800000 13:57:46 kafka | group.max.size = 2147483647 13:57:46 kafka | group.min.session.timeout.ms = 6000 13:57:46 kafka | initial.broker.registration.timeout.ms = 60000 13:57:46 kafka | inter.broker.listener.name = PLAINTEXT 13:57:46 kafka | inter.broker.protocol.version = 3.5-IV2 13:57:46 kafka | kafka.metrics.polling.interval.secs = 10 13:57:46 kafka | kafka.metrics.reporters = [] 13:57:46 kafka | leader.imbalance.check.interval.seconds = 300 13:57:46 kafka | leader.imbalance.per.broker.percentage = 10 13:57:46 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 13:57:46 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 13:57:46 kafka | log.cleaner.backoff.ms = 15000 13:57:46 kafka | log.cleaner.dedupe.buffer.size = 134217728 13:57:46 kafka | log.cleaner.delete.retention.ms = 86400000 13:57:46 kafka | log.cleaner.enable = true 13:57:46 kafka | log.cleaner.io.buffer.load.factor = 0.9 13:57:46 kafka | log.cleaner.io.buffer.size = 524288 13:57:46 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 13:57:46 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 13:57:46 kafka | log.cleaner.min.cleanable.ratio = 0.5 13:57:46 kafka | log.cleaner.min.compaction.lag.ms = 0 13:57:46 kafka | log.cleaner.threads = 1 13:57:46 kafka | log.cleanup.policy = [delete] 13:57:46 kafka | log.dir = /tmp/kafka-logs 13:57:46 kafka | log.dirs = /var/lib/kafka/data 13:57:46 kafka | log.flush.interval.messages = 9223372036854775807 13:57:46 kafka | log.flush.interval.ms = null 13:57:46 kafka | log.flush.offset.checkpoint.interval.ms = 60000 13:57:46 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 13:57:46 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 13:57:46 kafka | log.index.interval.bytes = 4096 13:57:46 kafka | log.index.size.max.bytes = 10485760 13:57:46 kafka | log.message.downconversion.enable = true 13:57:46 kafka | log.message.format.version = 3.0-IV1 13:57:46 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 13:57:46 kafka | log.message.timestamp.type = CreateTime 13:57:46 kafka | log.preallocate = false 13:57:46 kafka | log.retention.bytes = -1 13:57:46 kafka | log.retention.check.interval.ms = 300000 13:57:46 kafka | log.retention.hours = 168 13:57:46 kafka | log.retention.minutes = null 13:57:46 kafka | log.retention.ms = null 13:57:46 kafka | log.roll.hours = 168 13:57:46 kafka | log.roll.jitter.hours = 0 13:57:46 kafka | log.roll.jitter.ms = null 13:57:46 kafka | log.roll.ms = null 13:57:46 kafka | log.segment.bytes = 1073741824 13:57:46 kafka | log.segment.delete.delay.ms = 60000 13:57:46 kafka | max.connection.creation.rate = 2147483647 13:57:46 kafka | max.connections = 2147483647 13:57:46 kafka | max.connections.per.ip = 2147483647 13:57:46 kafka | max.connections.per.ip.overrides = 13:57:46 kafka | max.incremental.fetch.session.cache.slots = 1000 13:57:46 kafka | message.max.bytes = 1048588 13:57:46 kafka | metadata.log.dir = null 13:57:46 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 13:57:46 kafka | metadata.log.max.snapshot.interval.ms = 3600000 13:57:46 kafka | metadata.log.segment.bytes = 1073741824 13:57:46 kafka | metadata.log.segment.min.bytes = 8388608 13:57:46 kafka | metadata.log.segment.ms = 604800000 13:57:46 kafka | metadata.max.idle.interval.ms = 500 13:57:46 kafka | metadata.max.retention.bytes = 104857600 13:57:46 kafka | metadata.max.retention.ms = 604800000 13:57:46 kafka | 
metric.reporters = [] 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.912005217Z level=info msg="Executing migration" id="Update alert_notification table charset" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.91209347Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=87.463µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.915424377Z level=info msg="Executing migration" id="create notification_journal table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.91666517Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.240613ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.919893134Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.920945882Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.052208ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.924989428Z level=info msg="Executing migration" id="drop alert_notification_journal" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.925897202Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=908.154µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.929076625Z level=info msg="Executing migration" id="create alert_notification_state table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.929913737Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=837.142µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.93308533Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.934093877Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.008277ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.937619769Z level=info msg="Executing migration" id="Add for to alert table" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.941515221Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.895202ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.94451332Z level=info msg="Executing migration" id="Add column uid in alert_notification" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.948131405Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.619685ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.951212926Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:06.951514874Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=302.048µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.010582393Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.01239619Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.816427ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.017290289Z level=info msg="Executing migration" id="Remove unique index org_id_name" 13:57:46 
grafana | logger=migrator t=2024-01-22T13:55:07.018266914Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=976.655µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.021055957Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.02533418Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.276962ms 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.029190851Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.029342645Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=150.144µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.03258191Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.033480533Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=898.373µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.036325438Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.037289083Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=962.855µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.041154534Z level=info msg="Executing migration" id="Drop old annotation table v4" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.041496143Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=169.295µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.044706098Z level=info msg="Executing migration" id="create annotation table v5" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.045588491Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=881.703µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.048546108Z level=info msg="Executing migration" id="add index annotation 0 v3" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.049443062Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=896.744µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.053236991Z level=info msg="Executing migration" id="add index annotation 1 v3" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.054145445Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=908.384µs 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.057155344Z level=info msg="Executing migration" id="add index annotation 2 v3" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.058069708Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=914.294µs 13:57:46 mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 13:57:46 mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 13:57:46 mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
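[Each grafana migrator pair in the stream above ("Executing migration" followed by "Migration successfully executed" with a duration) corresponds to one row Grafana records in its migration_log bookkeeping table, so applied steps are skipped on the next start. A hedged spot-check, assuming the default sqlite3 backend at /var/lib/grafana/grafana.db and the usual migration_log column names (both assumptions; neither is confirmed by this log):

# Assumption: default sqlite backend; path and columns may differ by version.
sqlite3 /var/lib/grafana/grafana.db \
  "SELECT migration_id, success, timestamp FROM migration_log ORDER BY timestamp DESC LIMIT 5;"
]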
13:57:46 mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Initializing database files 13:57:46 mariadb | 2024-01-22 13:55:08 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 13:57:46 mariadb | 2024-01-22 13:55:08 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 13:57:46 mariadb | 2024-01-22 13:55:08 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 13:57:46 mariadb | 13:57:46 mariadb | 13:57:46 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 13:57:46 mariadb | To do so, start the server, then issue the following command: 13:57:46 mariadb | 13:57:46 mariadb | '/usr/bin/mysql_secure_installation' 13:57:46 mariadb | 13:57:46 mariadb | which will also give you the option of removing the test 13:57:46 mariadb | databases and anonymous user created by default. This is 13:57:46 mariadb | strongly recommended for production servers. 13:57:46 mariadb | 13:57:46 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 13:57:46 mariadb | 13:57:46 mariadb | Please report any problems at https://mariadb.org/jira 13:57:46 mariadb | 13:57:46 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 13:57:46 mariadb | 13:57:46 mariadb | Consider joining MariaDB's strong and vibrant community: 13:57:46 mariadb | https://mariadb.org/get-involved/ 13:57:46 mariadb | 13:57:46 mariadb | 2024-01-22 13:55:10+00:00 [Note] [Entrypoint]: Database files initialized 13:57:46 mariadb | 2024-01-22 13:55:10+00:00 [Note] [Entrypoint]: Starting temporary server 13:57:46 mariadb | 2024-01-22 13:55:10+00:00 [Note] [Entrypoint]: Waiting for server startup 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Number of transaction pools: 1 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Completed initialization of buffer pool 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: 128 rollback segments are active. 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: log sequence number 46574; transaction id 14 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] Plugin 'FEEDBACK' is disabled. 
13:57:46 mariadb | 2024-01-22 13:55:10 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 13:57:46 mariadb | 2024-01-22 13:55:10 0 [Note] mariadbd: ready for connections. 13:57:46 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 13:57:46 mariadb | 2024-01-22 13:55:11+00:00 [Note] [Entrypoint]: Temporary server started. 13:57:46 mariadb | 2024-01-22 13:55:13+00:00 [Note] [Entrypoint]: Creating user policy_user 13:57:46 mariadb | 2024-01-22 13:55:13+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 13:57:46 mariadb | 13:57:46 mariadb | 13:57:46 mariadb | 2024-01-22 13:55:13+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 13:57:46 mariadb | 2024-01-22 13:55:13+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 13:57:46 mariadb | #!/bin/bash -xv 13:57:46 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 13:57:46 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 13:57:46 mariadb | # 13:57:46 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 13:57:46 mariadb | # you may not use this file except in compliance with the License. 13:57:46 mariadb | # You may obtain a copy of the License at 13:57:46 mariadb | # 13:57:46 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 13:57:46 mariadb | # 13:57:46 mariadb | # Unless required by applicable law or agreed to in writing, software 13:57:46 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 13:57:46 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13:57:46 mariadb | # See the License for the specific language governing permissions and 13:57:46 mariadb | # limitations under the License. 
13:57:46 mariadb |
13:57:46 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:57:46 mariadb | do
13:57:46 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
13:57:46 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
13:57:46 mariadb | done
13:57:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:57:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
13:57:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:57:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:57:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
13:57:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:57:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:57:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
13:57:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:57:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:57:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
13:57:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:57:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:57:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
13:57:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:57:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
13:57:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
13:57:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
13:57:46 mariadb |
13:57:46 kafka | metrics.num.samples = 2
13:57:46 kafka | metrics.recording.level = INFO
13:57:46 kafka | metrics.sample.window.ms = 30000
13:57:46 kafka | min.insync.replicas = 1
13:57:46 kafka | node.id = 1
13:57:46 kafka | num.io.threads = 8
13:57:46 kafka | num.network.threads = 3
13:57:46 kafka | num.partitions = 1
13:57:46 kafka | num.recovery.threads.per.data.dir = 1
13:57:46 kafka | num.replica.alter.log.dirs.threads = null
13:57:46 kafka | num.replica.fetchers = 1
13:57:46 kafka | offset.metadata.max.bytes = 4096
13:57:46 kafka | offsets.commit.required.acks = -1
13:57:46 kafka | offsets.commit.timeout.ms = 5000
13:57:46 kafka | offsets.load.buffer.size = 5242880
13:57:46 kafka | offsets.retention.check.interval.ms = 600000
13:57:46 kafka | offsets.retention.minutes = 10080
13:57:46 kafka | offsets.topic.compression.codec = 0
13:57:46 kafka | offsets.topic.num.partitions = 50
13:57:46 kafka | offsets.topic.replication.factor = 1
13:57:46 kafka | offsets.topic.segment.bytes = 104857600
13:57:46 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
13:57:46 kafka | password.encoder.iterations = 4096
13:57:46 kafka | password.encoder.key.length = 128
13:57:46 kafka | password.encoder.keyfactory.algorithm = null
13:57:46 kafka | password.encoder.old.secret = null
13:57:46 kafka | password.encoder.secret = null
13:57:46 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
13:57:46 kafka | process.roles = []
13:57:46 kafka | producer.id.expiration.check.interval.ms = 600000
13:57:46 kafka | producer.id.expiration.ms = 86400000
13:57:46 kafka | producer.purgatory.purge.interval.requests = 1000
13:57:46 kafka | queued.max.request.bytes = -1
13:57:46 kafka | queued.max.requests = 500
13:57:46 kafka | quota.window.num = 11
13:57:46 kafka | quota.window.size.seconds = 1
13:57:46 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
13:57:46 kafka | remote.log.manager.task.interval.ms = 30000
13:57:46 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
13:57:46 kafka | remote.log.manager.task.retry.backoff.ms = 500
13:57:46 kafka | remote.log.manager.task.retry.jitter = 0.2
13:57:46 kafka | remote.log.manager.thread.pool.size = 10
13:57:46 kafka | remote.log.metadata.manager.class.name = null
13:57:46 kafka | remote.log.metadata.manager.class.path = null
13:57:46 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
13:57:46 kafka | remote.log.metadata.manager.impl.prefix = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.061025966Z level=info msg="Executing migration" id="add index annotation 3 v3"
13:57:46 policy-apex-pdp | Waiting for mariadb port 3306...
13:57:46 policy-api | Waiting for mariadb port 3306...
13:57:46 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
13:57:46 kafka | remote.log.metadata.manager.listener.name = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.062072653Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.046327ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.065897403Z level=info msg="Executing migration" id="add index annotation 4 v3"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.066940061Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.042218ms
13:57:46 policy-apex-pdp | mariadb (172.17.0.5:3306) open
13:57:46 policy-api | mariadb (172.17.0.5:3306) open
13:57:46 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
13:57:46 policy-pap | Waiting for mariadb port 3306...
13:57:46 policy-pap | mariadb (172.17.0.5:3306) open
13:57:46 kafka | remote.log.reader.max.pending.tasks = 100
13:57:46 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.069676182Z level=info msg="Executing migration" id="Update annotation table charset"
13:57:46 policy-apex-pdp | Waiting for kafka port 9092...
13:57:46 policy-api | Waiting for policy-db-migrator port 6824...
13:57:46 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
13:57:46 prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d
13:57:46 policy-pap | Waiting for kafka port 9092...
13:57:46 kafka | remote.log.reader.threads = 10
13:57:46 simulator | overriding logback.xml
13:57:46 policy-db-migrator | Waiting for mariadb port 3306...
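The db.sh loop above creates six databases and grants policy_user full privileges on each. The grants can be spot-checked from any container on the same compose network; a minimal sketch (the mariadb host name comes from the compose network, the credentials from the trace above):

  for db in migration pooling policyadmin operationshistory clampacm policyclamp; do
    mysql -h mariadb -upolicy_user -ppolicy_user -e "USE \`${db}\`; SELECT DATABASE();"
  done
  mysql -h mariadb -upolicy_user -ppolicy_user -e "SHOW GRANTS FOR CURRENT_USER;"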
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.069777925Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=101.203µs
13:57:46 policy-apex-pdp | kafka (172.17.0.8:9092) open
13:57:46 policy-api | policy-db-migrator (172.17.0.7:6824) open
13:57:46 mariadb |
13:57:46 prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)"
13:57:46 policy-pap | kafka (172.17.0.8:9092) open
13:57:46 kafka | remote.log.storage.manager.class.name = null
13:57:46 simulator | 2024-01-22 13:55:07,470 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
13:57:46 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.071803258Z level=info msg="Executing migration" id="Add column region_id to annotation table"
13:57:46 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
13:57:46 mariadb | 2024-01-22 13:55:14+00:00 [Note] [Entrypoint]: Stopping temporary server
13:57:46 prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)"
13:57:46 policy-pap | Waiting for api port 6969...
13:57:46 kafka | remote.log.storage.manager.class.path = null
13:57:46 simulator | 2024-01-22 13:55:07,556 INFO org.onap.policy.models.simulators starting
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.075869415Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.065367ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.079422938Z level=info msg="Executing migration" id="Drop category_id index"
13:57:46 policy-apex-pdp | Waiting for pap port 6969...
13:57:46 policy-api |
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
13:57:46 prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
13:57:46 policy-pap | api (172.17.0.9:6969) open
13:57:46 kafka | remote.log.storage.manager.impl.prefix = null
13:57:46 simulator | 2024-01-22 13:55:07,556 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.080360023Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=935.905µs
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.083050913Z level=info msg="Executing migration" id="Add column tags to annotation table"
13:57:46 policy-apex-pdp | pap (172.17.0.10:6969) open
13:57:46 policy-api | . ____ _ __ _ _
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: FTS optimize thread exiting.
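The policy-db-migrator "nc: connect ... Connection refused" lines are a deliberate wait loop: the migrator retries the TCP port with netcat until mariadb finishes its init and restarts on 3306. Roughly this shape, as a sketch only (the migrator's actual script is not shown in this log):

  # block until the database container accepts TCP connections
  until nc -zv mariadb 3306; do
    sleep 2
  done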
13:57:46 prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)"
13:57:46 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
13:57:46 kafka | remote.log.storage.system.enable = false
13:57:46 simulator | 2024-01-22 13:55:07,784 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.086952105Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.900972ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.133236759Z level=info msg="Executing migration" id="Create annotation_tag table v2"
13:57:46 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.256+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Starting shutdown...
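The apex-pdp launch command above hands the TLS material to the JVM via javax.net.ssl system properties. The referenced stores can be inspected with keytool, a sketch that reuses the path and password taken directly from that command:

  keytool -list \
    -keystore /opt/app/policy/apex-pdp/etc/ssl/policy-truststore \
    -storepass Pol1cy_0nap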
13:57:46 prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)"
13:57:46 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
13:57:46 kafka | replica.fetch.backoff.ms = 1000
13:57:46 simulator | 2024-01-22 13:55:07,785 INFO org.onap.policy.models.simulators starting A&AI simulator
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.134574764Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.340335ms
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.13898171Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
13:57:46 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.501+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
13:57:46 prometheus | ts=2024-01-22T13:55:04.883Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
13:57:46 policy-pap |
13:57:46 kafka | replica.fetch.max.bytes = 1048576
13:57:46 simulator | 2024-01-22 13:55:07,883 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
13:57:46 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.141189798Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=2.208738ms
13:57:46 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
13:57:46 policy-apex-pdp | allow.auto.create.topics = true
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Buffer pool(s) dump completed at 240122 13:55:14
13:57:46 prometheus | ts=2024-01-22T13:55:04.884Z caller=main.go:1039 level=info msg="Starting TSDB ..."
13:57:46 policy-pap | . ____ _ __ _ _
13:57:46 kafka | replica.fetch.min.bytes = 1
13:57:46 simulator | 2024-01-22 13:55:07,894 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.144965227Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
13:57:46 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
13:57:46 policy-apex-pdp | auto.commit.interval.ms = 5000
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
13:57:46 prometheus | ts=2024-01-22T13:55:04.887Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
13:57:46 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
13:57:46 kafka | replica.fetch.response.max.bytes = 10485760
13:57:46 simulator | 2024-01-22 13:55:07,897 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.145910542Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=945.305µs
13:57:46 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
13:57:46 policy-apex-pdp | auto.include.jmx.reporter = true
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Shutdown completed; log sequence number 347209; transaction id 298
13:57:46 prometheus | ts=2024-01-22T13:55:04.887Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
13:57:46 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
13:57:46 kafka | replica.fetch.wait.max.ms = 500
13:57:46 simulator | 2024-01-22 13:55:07,900 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
13:57:46 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.149606109Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
13:57:46 policy-api | =========|_|==============|___/=/_/_/_/
13:57:46 policy-apex-pdp | auto.offset.reset = latest
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd: Shutdown complete
13:57:46 prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
13:57:46 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
13:57:46 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
13:57:46 simulator | 2024-01-22 13:55:07,961 INFO Session workerName=node0
13:57:46 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded!
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.165006533Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.396073ms
13:57:46 policy-api | :: Spring Boot :: (v3.1.4)
13:57:46 policy-apex-pdp | bootstrap.servers = [kafka:9092]
13:57:46 mariadb |
13:57:46 prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.19µs
13:57:46 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
13:57:46 kafka | replica.lag.time.max.ms = 30000
13:57:46 simulator | 2024-01-22 13:55:08,567 INFO Using GSON for REST calls
13:57:46 policy-db-migrator | 321 blocks
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.169149981Z level=info msg="Executing migration" id="Create annotation_tag table v3"
13:57:46 policy-api |
13:57:46 policy-apex-pdp | check.crcs = true
13:57:46 mariadb | 2024-01-22 13:55:14+00:00 [Note] [Entrypoint]: Temporary server stopped
13:57:46 prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while"
13:57:46 policy-pap | =========|_|==============|___/=/_/_/_/
13:57:46 kafka | replica.selector.class = null
13:57:46 simulator | 2024-01-22 13:55:08,649 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}
13:57:46 policy-db-migrator | Preparing upgrade release version: 0800
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.169739667Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=589.726µs
13:57:46 policy-api | [2024-01-22T13:55:24.406+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
13:57:46 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
13:57:46 mariadb |
13:57:46 prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
13:57:46 policy-pap | :: Spring Boot :: (v3.1.7)
13:57:46 kafka | replica.socket.receive.buffer.bytes = 65536
13:57:46 simulator | 2024-01-22 13:55:08,657 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
13:57:46 policy-db-migrator | Preparing upgrade release version: 0900
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.173031103Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
13:57:46 policy-api | [2024-01-22T13:55:24.408+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
13:57:46 policy-apex-pdp | client.id = consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-1
13:57:46 mariadb | 2024-01-22 13:55:14+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
13:57:46 prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=27.442µs wal_replay_duration=540.474µs wbl_replay_duration=180ns total_replay_duration=613.058µs
13:57:46 policy-pap |
13:57:46 kafka | replica.socket.timeout.ms = 30000
13:57:46 simulator | 2024-01-22 13:55:08,664 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1783ms
13:57:46 policy-db-migrator | Preparing upgrade release version: 1000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.173991588Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=960.265µs
13:57:46 policy-api | [2024-01-22T13:55:26.223+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
13:57:46 policy-apex-pdp | client.rack =
13:57:46 mariadb |
13:57:46 prometheus | ts=2024-01-22T13:55:04.896Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
13:57:46 policy-pap | [2024-01-22T13:55:38.094+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
13:57:46 kafka | replication.quota.window.num = 11
13:57:46 simulator | 2024-01-22 13:55:08,664 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4233 ms.
13:57:46 policy-db-migrator | Preparing upgrade release version: 1100
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.176787871Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
13:57:46 policy-api | [2024-01-22T13:55:26.320+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 80 ms. Found 6 JPA repository interfaces.
13:57:46 policy-apex-pdp | connections.max.idle.ms = 540000
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
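The kafka lines throughout this section are the broker's effective configuration dump printed at startup. The same values can be queried from the running broker with the stock CLI; a sketch assuming the Kafka tools are on the PATH inside the broker container (node.id = 1 per the dump above):

  kafka-configs --bootstrap-server kafka:9092 \
    --entity-type brokers --entity-name 1 --describe --all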
13:57:46 prometheus | ts=2024-01-22T13:55:04.896Z caller=main.go:1063 level=info msg="TSDB started"
13:57:46 policy-pap | [2024-01-22T13:55:38.095+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
13:57:46 kafka | replication.quota.window.size.seconds = 1
13:57:46 simulator | 2024-01-22 13:55:08,669 INFO org.onap.policy.models.simulators starting SDNC simulator
13:57:46 policy-db-migrator | Preparing upgrade release version: 1200
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.177143701Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=351.4µs
13:57:46 policy-api | [2024-01-22T13:55:26.719+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
13:57:46 policy-apex-pdp | default.api.timeout.ms = 60000
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
13:57:46 prometheus | ts=2024-01-22T13:55:04.896Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
13:57:46 policy-pap | [2024-01-22T13:55:40.118+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
13:57:46 kafka | request.timeout.ms = 30000
13:57:46 simulator | 2024-01-22 13:55:08,672 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
13:57:46 policy-db-migrator | Preparing upgrade release version: 1300
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.180814087Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
13:57:46 policy-api | [2024-01-22T13:55:26.720+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
13:57:46 policy-apex-pdp | enable.auto.commit = true
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Number of transaction pools: 1
13:57:46 prometheus | ts=2024-01-22T13:55:04.898Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.712397ms db_storage=2.24µs remote_storage=2.941µs web_handler=890ns query_engine=1.81µs scrape=321.234µs scrape_sd=162.528µs notify=38.632µs notify_sd=15.5µs rules=3.42µs tracing=10.51µs
13:57:46 policy-pap | [2024-01-22T13:55:40.249+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 119 ms. Found 7 JPA repository interfaces.
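Prometheus starts its TSDB, replays an empty WAL, and loads /etc/prometheus/prometheus.yml in under 2 ms. A minimal stand-in for the kind of file being loaded, sketched as an assumption (this CSIT's actual scrape targets are not shown in this excerpt):

  cat > prometheus.yml <<'EOF'
  global:
    scrape_interval: 15s
  scrape_configs:
    - job_name: prometheus
      static_configs:
        - targets: ['localhost:9090']
  EOF
  prometheus --config.file=prometheus.yml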
13:57:46 kafka | reserved.broker.max.id = 1000
13:57:46 simulator | 2024-01-22 13:55:08,672 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-db-migrator | Done
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.181439773Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=624.886µs
13:57:46 policy-api | [2024-01-22T13:55:27.411+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
13:57:46 policy-apex-pdp | exclude.internal.topics = true
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
13:57:46 prometheus | ts=2024-01-22T13:55:04.898Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
13:57:46 policy-pap | [2024-01-22T13:55:40.696+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
13:57:46 kafka | sasl.client.callback.handler.class = null
13:57:46 simulator | 2024-01-22 13:55:08,673 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-db-migrator | name version
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.184521604Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
13:57:46 policy-api | [2024-01-22T13:55:27.420+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
13:57:46 policy-apex-pdp | fetch.max.bytes = 52428800
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
13:57:46 prometheus | ts=2024-01-22T13:55:04.898Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
13:57:46 policy-pap | [2024-01-22T13:55:40.696+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
13:57:46 kafka | sasl.enabled.mechanisms = [GSSAPI]
13:57:46 simulator | 2024-01-22 13:55:08,675 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
13:57:46 policy-db-migrator | policyadmin 0
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.184777311Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=255.57µs
13:57:46 policy-api | [2024-01-22T13:55:27.422+00:00|INFO|StandardService|main] Starting service [Tomcat]
13:57:46 policy-apex-pdp | fetch.max.wait.ms = 500
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
13:57:46 policy-pap | [2024-01-22T13:55:41.444+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
13:57:46 kafka | sasl.jaas.config = null
13:57:46 simulator | 2024-01-22 13:55:08,680 INFO Session workerName=node0
13:57:46 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.187734979Z level=info msg="Executing migration" id="Add created time to annotation table"
13:57:46 policy-api | [2024-01-22T13:55:27.422+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
13:57:46 policy-apex-pdp | fetch.min.bytes = 1
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
13:57:46 policy-pap | [2024-01-22T13:55:41.455+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
13:57:46 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:57:46 simulator | 2024-01-22 13:55:08,836 INFO Using GSON for REST calls
13:57:46 policy-db-migrator | upgrade: 0 -> 1300
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.191902878Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.167569ms
13:57:46 policy-api | [2024-01-22T13:55:27.513+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
13:57:46 policy-apex-pdp | group.id = e65163a7-0954-4bf8-9924-8c41fa40f9af
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
13:57:46 policy-pap | [2024-01-22T13:55:41.458+00:00|INFO|StandardService|main] Starting service [Tomcat]
13:57:46 kafka | sasl.kerberos.min.time.before.relogin = 60000
13:57:46 simulator | 2024-01-22 13:55:08,852 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.196073317Z level=info msg="Executing migration" id="Add updated time to annotation table"
13:57:46 policy-api | [2024-01-22T13:55:27.513+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3028 ms
13:57:46 policy-apex-pdp | group.instance.id = null
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Completed initialization of buffer pool
13:57:46 policy-pap | [2024-01-22T13:55:41.458+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
13:57:46 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
13:57:46 simulator | 2024-01-22 13:55:08,853 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
13:57:46 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.200694498Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.620091ms
13:57:46 policy-api | [2024-01-22T13:55:28.106+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
13:57:46 policy-apex-pdp | heartbeat.interval.ms = 3000
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
13:57:46 policy-pap | [2024-01-22T13:55:41.554+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
13:57:46 kafka | sasl.kerberos.service.name = null
13:57:46 simulator | 2024-01-22 13:55:08,853 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1972ms
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.203886812Z level=info msg="Executing migration" id="Add index for created in annotation table"
13:57:46 policy-api | [2024-01-22T13:55:28.181+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
13:57:46 policy-apex-pdp | interceptor.classes = []
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: 128 rollback segments are active.
13:57:46 policy-pap | [2024-01-22T13:55:41.554+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3373 ms
13:57:46 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
13:57:46 simulator | 2024-01-22 13:55:08,853 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4820 ms.
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.205094264Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.207142ms
13:57:46 policy-api | [2024-01-22T13:55:28.184+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
13:57:46 policy-apex-pdp | internal.leave.group.on.close = true
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
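Each "> upgrade NNNN-*.sql" step the migrator prints is a numbered, idempotent DDL file (CREATE TABLE IF NOT EXISTS ...), so a re-run is harmless. The general shape of such a runner, as a sketch only under that assumption (the real policy-db-migrator internals are not shown in this log):

  # apply numbered upgrade scripts in lexical (version) order
  for f in $(ls upgrade/*.sql | sort); do
    echo "> upgrade $(basename "$f")"
    mysql -h mariadb -upolicy_user -ppolicy_user policyadmin < "$f"
  done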
13:57:46 policy-pap | [2024-01-22T13:55:42.014+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
13:57:46 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
13:57:46 simulator | 2024-01-22 13:55:08,855 INFO org.onap.policy.models.simulators starting SO simulator
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.208426471Z level=info msg="Executing migration" id="Add index for updated in annotation table"
13:57:46 policy-api | [2024-01-22T13:55:28.235+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
13:57:46 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
13:57:46 policy-pap | [2024-01-22T13:55:42.102+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
13:57:46 kafka | sasl.login.callback.handler.class = null
13:57:46 simulator | 2024-01-22 13:55:08,859 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
13:57:46 policy-api | [2024-01-22T13:55:28.596+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
13:57:46 policy-apex-pdp | isolation.level = read_uncommitted
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: log sequence number 347209; transaction id 299
13:57:46 policy-pap | [2024-01-22T13:55:42.105+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
13:57:46 kafka | sasl.login.class = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.209113459Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=686.938µs
13:57:46 simulator | 2024-01-22 13:55:08,860 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-api | [2024-01-22T13:55:28.617+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
13:57:46 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] Plugin 'FEEDBACK' is disabled.
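At this point the A&AI (6666) and SDNC (6668) simulators are listening and SO (6669) is coming up, with VFC (6670) to follow. A quick reachability sketch from the host (endpoint paths are hypothetical, so only the HTTP status of the root path is checked):

  for port in 6666 6668 6669; do
    curl -s -o /dev/null -w "port ${port}: HTTP %{http_code}\n" "http://localhost:${port}/"
  done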
13:57:46 policy-pap | [2024-01-22T13:55:42.147+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
13:57:46 kafka | sasl.login.connect.timeout.ms = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.213486744Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
13:57:46 simulator | 2024-01-22 13:55:08,863 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-apex-pdp | max.partition.fetch.bytes = 1048576
13:57:46 policy-pap | [2024-01-22T13:55:42.488+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
13:57:46 kafka | sasl.login.read.timeout.ms = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.213799322Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=305.328µs
13:57:46 policy-db-migrator | --------------
13:57:46 simulator | 2024-01-22 13:55:08,864 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
13:57:46 policy-api | [2024-01-22T13:55:28.716+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717
13:57:46 policy-api | [2024-01-22T13:55:28.718+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
13:57:46 policy-pap | [2024-01-22T13:55:42.507+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
13:57:46 kafka | sasl.login.refresh.buffer.seconds = 300
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.21602001Z level=info msg="Executing migration" id="Add epoch_end column"
13:57:46 policy-db-migrator |
13:57:46 simulator | 2024-01-22 13:55:08,868 INFO Session workerName=node0
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
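The jpapdpgroup_properties DDL above lands in the policyadmin schema (per the migrator's "policyadmin: upgrade available" banner). Once applied, the resulting layout can be confirmed with DESCRIBE; a minimal sketch:

  mysql -h mariadb -upolicy_user -ppolicy_user \
    -e "DESCRIBE policyadmin.jpapdpgroup_properties;"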
13:57:46 policy-api | [2024-01-22T13:55:28.749+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
13:57:46 policy-pap | [2024-01-22T13:55:42.638+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@288ca5f0
13:57:46 kafka | sasl.login.refresh.min.period.seconds = 60
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.220789035Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.766215ms
13:57:46 policy-db-migrator |
13:57:46 simulator | 2024-01-22 13:55:08,941 INFO Using GSON for REST calls
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
13:57:46 policy-apex-pdp | max.poll.interval.ms = 300000
13:57:46 policy-api | [2024-01-22T13:55:28.751+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
13:57:46 policy-pap | [2024-01-22T13:55:42.641+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
13:57:46 kafka | sasl.login.refresh.window.factor = 0.8
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.224133923Z level=info msg="Executing migration" id="Add index for epoch_end"
13:57:46 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
13:57:46 simulator | 2024-01-22 13:55:08,954 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] Server socket created on IP: '0.0.0.0'.
13:57:46 policy-apex-pdp | max.poll.records = 500
13:57:46 policy-api | [2024-01-22T13:55:30.670+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
13:57:46 policy-pap | [2024-01-22T13:55:42.669+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
13:57:46 kafka | sasl.login.refresh.window.jitter = 0.05
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.225280213Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.14565ms
13:57:46 policy-db-migrator | --------------
13:57:46 simulator | 2024-01-22 13:55:08,955 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] Server socket created on IP: '::'.
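The mariadb --log-bin warning means the configured expiry settings are inert without binary logging. If binlogs were wanted, a config fragment of this shape would satisfy it; a sketch only, not part of this CSIT setup:

  cat > /etc/mysql/conf.d/binlog.cnf <<'EOF'
  [mariadb]
  log_bin          = mysqld-bin
  expire_logs_days = 7
  EOF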
13:57:46 policy-apex-pdp | metadata.max.age.ms = 300000
13:57:46 policy-api | [2024-01-22T13:55:30.674+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
13:57:46 policy-pap | [2024-01-22T13:55:42.670+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
13:57:46 kafka | sasl.login.retry.backoff.max.ms = 10000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.22898971Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
13:57:46 simulator | 2024-01-22 13:55:08,955 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @2074ms
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd: ready for connections.
13:57:46 policy-apex-pdp | metric.reporters = []
13:57:46 policy-api | [2024-01-22T13:55:32.022+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
13:57:46 policy-pap | [2024-01-22T13:55:44.682+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
13:57:46 kafka | sasl.login.retry.backoff.ms = 100
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.229463873Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=477.563µs
13:57:46 policy-db-migrator | --------------
13:57:46 simulator | 2024-01-22 13:55:08,955 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4908 ms.
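Both policy-api and policy-pap log HHH90000025/HHH90000026 because hibernate.dialect is pinned to the deprecated MariaDB103Dialect; per the warning text itself, the fix is to drop the explicit setting or switch to the generic dialect. A sketch of the Spring property involved (the property file location is an assumption):

  # either delete the explicit hibernate.dialect entry entirely, or:
  cat >> application.properties <<'EOF'
  spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MariaDBDialect
  EOF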
13:57:46 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
13:57:46 policy-apex-pdp | metrics.num.samples = 2
13:57:46 policy-api | [2024-01-22T13:55:32.938+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
13:57:46 policy-pap | [2024-01-22T13:55:44.686+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
13:57:46 kafka | sasl.mechanism.controller.protocol = GSSAPI
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.234417953Z level=info msg="Executing migration" id="Move region to single row"
13:57:46 policy-db-migrator |
13:57:46 simulator | 2024-01-22 13:55:08,956 INFO org.onap.policy.models.simulators starting VFC simulator
13:57:46 mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Buffer pool(s) load completed at 240122 13:55:14
13:57:46 policy-apex-pdp | metrics.recording.level = INFO
13:57:46 policy-api | [2024-01-22T13:55:34.147+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
13:57:46 policy-pap | [2024-01-22T13:55:45.254+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
13:57:46 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.235306146Z level=info msg="Migration successfully executed" id="Move region to single row" duration=891.353µs
13:57:46 policy-db-migrator |
13:57:46 simulator | 2024-01-22 13:55:08,959 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
13:57:46 mariadb | 2024-01-22 13:55:15 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
13:57:46 policy-apex-pdp | metrics.sample.window.ms = 30000
13:57:46 policy-api | [2024-01-22T13:55:34.358+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@58a01e47, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6149184e, org.springframework.security.web.context.SecurityContextHolderFilter@234a08ea, org.springframework.security.web.header.HeaderWriterFilter@2e26841f, org.springframework.security.web.authentication.logout.LogoutFilter@c7a7d3, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3413effc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56d3e4a9, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2542d320, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6f3a8d5e, org.springframework.security.web.access.ExceptionTranslationFilter@19bd1f98, org.springframework.security.web.access.intercept.AuthorizationFilter@729f8c5d]
13:57:46 policy-pap | [2024-01-22T13:55:45.876+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
13:57:46 kafka | sasl.oauthbearer.clock.skew.seconds = 30
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.239047954Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
13:57:46 simulator | 2024-01-22 13:55:08,959 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 mariadb | 2024-01-22 13:55:15 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
13:57:46 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
13:57:46 policy-api | [2024-01-22T13:55:35.184+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
13:57:46 policy-pap | [2024-01-22T13:55:45.981+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
13:57:46 kafka | sasl.oauthbearer.expected.audience = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.240646596Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.601132ms
13:57:46 simulator | 2024-01-22 13:55:08,960 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 mariadb | 2024-01-22 13:55:15 21 [Warning] Aborted connection 21 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
13:57:46 policy-apex-pdp | receive.buffer.bytes = 65536
13:57:46 policy-api | [2024-01-22T13:55:35.237+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
13:57:46 policy-pap | [2024-01-22T13:55:46.268+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
13:57:46 kafka | sasl.oauthbearer.expected.issuer = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.244339553Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
13:57:46 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
13:57:46 simulator | 2024-01-22 13:55:08,961 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
13:57:46 mariadb | 2024-01-22 13:55:15 25 [Warning] Aborted connection 25 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
13:57:46 policy-apex-pdp | reconnect.backoff.max.ms = 1000
13:57:46 policy-api | [2024-01-22T13:55:35.274+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
13:57:46 policy-pap | allow.auto.create.topics = true
13:57:46 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.245477143Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.13852ms
13:57:46 policy-db-migrator | --------------
13:57:46 simulator | 2024-01-22 13:55:08,974 INFO Session workerName=node0
13:57:46 policy-apex-pdp | reconnect.backoff.ms = 50
13:57:46 policy-api | [2024-01-22T13:55:35.292+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.617 seconds (process running for 12.242)
13:57:46 policy-pap | auto.commit.interval.ms = 5000
13:57:46 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.249483198Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
13:57:46 simulator | 2024-01-22 13:55:09,075 INFO Using GSON for REST calls
13:57:46 policy-apex-pdp | request.timeout.ms = 30000
13:57:46 policy-api | [2024-01-22T13:55:39.938+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
13:57:46 policy-pap | auto.include.jmx.reporter = true
13:57:46 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.250576047Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.092429ms
13:57:46 policy-db-migrator | --------------
13:57:46 simulator | 2024-01-22 13:55:09,086 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}
13:57:46 policy-apex-pdp | retry.backoff.ms = 100
13:57:46 policy-api | [2024-01-22T13:55:39.938+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
13:57:46 policy-pap | auto.offset.reset = latest
13:57:46 kafka | sasl.oauthbearer.jwks.endpoint.url = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.253533404Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
13:57:46 policy-db-migrator |
13:57:46 simulator | 2024-01-22 13:55:09,091 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
13:57:46 policy-apex-pdp | sasl.client.callback.handler.class = null
13:57:46 policy-api | [2024-01-22T13:55:39.940+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
13:57:46 policy-pap | bootstrap.servers = [kafka:9092]
13:57:46 kafka | sasl.oauthbearer.scope.claim.name = scope
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.254481369Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=947.385µs
13:57:46 policy-db-migrator |
13:57:46 simulator | 2024-01-22 13:55:09,092 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @2211ms
13:57:46 policy-apex-pdp | sasl.jaas.config = null
13:57:46 policy-api | [2024-01-22T13:55:53.303+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
13:57:46 policy-pap | check.crcs = true
13:57:46 kafka | sasl.oauthbearer.sub.claim.name = sub
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.257724104Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
13:57:46 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
13:57:46 simulator | 2024-01-22 13:55:09,093 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4867 ms.
13:57:46 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:57:46 policy-api | []
13:57:46 policy-pap | client.dns.lookup = use_all_dns_ips
13:57:46 kafka | sasl.oauthbearer.token.endpoint.url = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.258636548Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=912.444µs
13:57:46 policy-db-migrator | --------------
13:57:46 simulator | 2024-01-22 13:55:09,096 INFO org.onap.policy.models.simulators started
13:57:46 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
13:57:46 policy-pap | client.id = consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-1
13:57:46 kafka | sasl.server.callback.handler.class = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.262373276Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
13:57:46 policy-apex-pdp | sasl.kerberos.service.name = null
13:57:46 policy-pap | client.rack =
13:57:46 kafka | sasl.server.max.receive.size = 524288
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.263330219Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=953.615µs
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
13:57:46 policy-pap | connections.max.idle.ms = 540000
13:57:46 kafka | security.inter.broker.protocol = PLAINTEXT
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.2667249Z level=info msg="Executing migration" id="Increase tags column to length 4096"
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
13:57:46 policy-apex-pdp | sasl.login.callback.handler.class = null
13:57:46 kafka | security.providers = null
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.266873564Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=148.394µs
13:57:46 policy-apex-pdp | sasl.login.class = null
13:57:46 policy-apex-pdp | sasl.login.connect.timeout.ms = null
13:57:46 kafka | server.max.startup.time.ms = 9223372036854775807
13:57:46 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.270248133Z level=info msg="Executing migration" id="create test_data table"
13:57:46 policy-apex-pdp | sasl.login.read.timeout.ms = null
13:57:46 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
13:57:46 kafka | socket.connection.setup.timeout.max.ms = 30000
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.271210898Z level=info msg="Migration successfully executed" id="create test_data table" duration=962.176µs
13:57:46 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
13:57:46 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
13:57:46 kafka | socket.connection.setup.timeout.ms = 10000
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.275257044Z level=info msg="Executing migration" id="create dashboard_version table v1"
13:57:46 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
13:57:46 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
13:57:46 kafka | socket.listen.backlog.size = 50
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.276059655Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=802.611µs
13:57:46 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
13:57:46 policy-apex-pdp | sasl.mechanism = GSSAPI
13:57:46 kafka | socket.receive.buffer.bytes = 102400
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.28845451Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
13:57:46 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
13:57:46 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
13:57:46 kafka | socket.request.max.bytes = 104857600
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.289533708Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.080378ms
13:57:46 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:57:46 kafka | socket.send.buffer.bytes = 102400
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.292350662Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:57:46 kafka | ssl.cipher.suites = []
13:57:46 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.293352088Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.001426ms
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
13:57:46 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
13:57:46 kafka | ssl.client.auth = none
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.297769474Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
13:57:46 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
13:57:46 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
13:57:46 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.298334199Z level=info msg="Migration successfully executed" id="Set
dashboard version to 1 where 0" duration=565.685µs 13:57:46 policy-apex-pdp | security.protocol = PLAINTEXT 13:57:46 policy-apex-pdp | security.providers = null 13:57:46 kafka | ssl.endpoint.identification.algorithm = https 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.301629215Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 13:57:46 policy-apex-pdp | send.buffer.bytes = 131072 13:57:46 policy-apex-pdp | session.timeout.ms = 45000 13:57:46 kafka | ssl.engine.factory.class = null 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.302399746Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=773.041µs 13:57:46 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 13:57:46 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 13:57:46 kafka | ssl.key.password = null 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.305895277Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 13:57:46 policy-apex-pdp | ssl.cipher.suites = null 13:57:46 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:57:46 kafka | ssl.keymanager.algorithm = SunX509 13:57:46 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.306171455Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=275.977µs 13:57:46 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 13:57:46 policy-apex-pdp | ssl.engine.factory.class = null 13:57:46 kafka | ssl.keystore.certificate.chain = null 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.309566214Z level=info msg="Executing migration" id="create team table" 13:57:46 policy-apex-pdp | ssl.key.password = null 13:57:46 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 13:57:46 kafka | ssl.keystore.key = null 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.310441087Z level=info msg="Migration successfully executed" id="create team table" duration=874.992µs 13:57:46 policy-apex-pdp | ssl.keystore.certificate.chain = null 13:57:46 policy-apex-pdp | ssl.keystore.key = null 13:57:46 kafka | ssl.keystore.location = null 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.314199325Z level=info msg="Executing migration" id="add index team.org_id" 13:57:46 policy-apex-pdp | ssl.keystore.location = null 13:57:46 policy-apex-pdp | ssl.keystore.password = null 13:57:46 kafka | ssl.keystore.password = null 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.315276293Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.080838ms 13:57:46 policy-apex-pdp | ssl.keystore.type = JKS 13:57:46 policy-apex-pdp | ssl.protocol = TLSv1.3 13:57:46 kafka | ssl.keystore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.318006825Z level=info msg="Executing migration" id="add unique index team_org_id_name" 13:57:46 policy-apex-pdp | ssl.provider = null 
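The "Compile the affected code with '-parameters' instead or avoid its introspection" warning near the top of this stretch is Spring falling back to deprecated bytecode-debug-info parameter-name resolution for org.onap.policy.pap.main.repository.PolicyAuditRepository: reflection only sees real method parameter names when javac was passed -parameters. A minimal, self-contained probe of that behavior (class and method names here are made up for illustration, not ONAP code):

    import java.lang.reflect.Method;
    import java.lang.reflect.Parameter;

    public class ParamNameProbe {
        // Stand-in for a repository query method whose parameter names Spring introspects.
        static void findAudit(String pdpGroupName) { }

        public static void main(String[] args) throws Exception {
            Method m = ParamNameProbe.class.getDeclaredMethod("findAudit", String.class);
            for (Parameter p : m.getParameters()) {
                // With -parameters: "pdpGroupName true"; without: "arg0 false".
                System.out.println(p.getName() + " " + p.isNamePresent());
            }
        }
    }

Compiled with javac -parameters it prints the declared name with isNamePresent()=true; compiled without the flag it prints arg0 false, which is the condition that triggers the fallback warning in the log.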
13:57:46 policy-pap | default.api.timeout.ms = 60000 13:57:46 kafka | ssl.principal.mapping.rules = DEFAULT 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.319036862Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.029407ms 13:57:46 policy-pap | enable.auto.commit = true 13:57:46 policy-pap | exclude.internal.topics = true 13:57:46 kafka | ssl.protocol = TLSv1.3 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.322985076Z level=info msg="Executing migration" id="Add column uid in team" 13:57:46 policy-pap | fetch.max.bytes = 52428800 13:57:46 kafka | ssl.provider = null 13:57:46 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.327850603Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.865138ms 13:57:46 policy-apex-pdp | ssl.secure.random.implementation = null 13:57:46 policy-pap | fetch.max.wait.ms = 500 13:57:46 kafka | ssl.secure.random.implementation = null 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.331186871Z level=info msg="Executing migration" id="Update uid column values in team" 13:57:46 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 13:57:46 policy-pap | fetch.min.bytes = 1 13:57:46 kafka | ssl.trustmanager.algorithm = PKIX 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.331461218Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=273.557µs 13:57:46 policy-apex-pdp | ssl.truststore.certificates = null 13:57:46 policy-apex-pdp | ssl.truststore.location = null 13:57:46 kafka | ssl.truststore.certificates = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.334706983Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 13:57:46 policy-apex-pdp | ssl.truststore.password = null 13:57:46 policy-pap | group.id = 79c954dd-4645-472b-b928-ee2d4186f7c1 13:57:46 kafka | ssl.truststore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.33574139Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.034087ms 13:57:46 policy-pap | group.instance.id = null 13:57:46 policy-pap | heartbeat.interval.ms = 3000 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | ssl.truststore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.449474473Z level=info msg="Executing migration" id="create team member table" 13:57:46 policy-pap | interceptor.classes = [] 13:57:46 policy-pap | internal.leave.group.on.close = true 13:57:46 policy-db-migrator | 13:57:46 kafka | ssl.truststore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.450348876Z level=info msg="Migration successfully executed" id="create team member table" duration=878.113µs 13:57:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:57:46 policy-pap | isolation.level = read_uncommitted 13:57:46 policy-db-migrator | 13:57:46 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.45469954Z level=info msg="Executing migration" id="add index team_member.org_id" 13:57:46 policy-pap | 
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 policy-pap | max.partition.fetch.bytes = 1048576 13:57:46 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 13:57:46 kafka | transaction.max.timeout.ms = 900000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.456360143Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.661293ms 13:57:46 policy-pap | max.poll.interval.ms = 300000 13:57:46 policy-pap | max.poll.records = 500 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.459541787Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 13:57:46 policy-pap | metadata.max.age.ms = 300000 13:57:46 policy-pap | metric.reporters = [] 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 kafka | transaction.state.log.load.buffer.size = 5242880 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.460544813Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.002896ms 13:57:46 policy-pap | metrics.num.samples = 2 13:57:46 policy-pap | metrics.recording.level = INFO 13:57:46 kafka | transaction.state.log.min.isr = 2 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.463612204Z level=info msg="Executing migration" id="add index team_member.team_id" 13:57:46 policy-pap | metrics.sample.window.ms = 30000 13:57:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | transaction.state.log.num.partitions = 50 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.464589759Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=977.696µs 13:57:46 policy-pap | receive.buffer.bytes = 65536 13:57:46 policy-pap | reconnect.backoff.max.ms = 1000 13:57:46 policy-db-migrator | 13:57:46 kafka | transaction.state.log.replication.factor = 3 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.468360408Z level=info msg="Executing migration" id="Add column email to team table" 13:57:46 policy-pap | reconnect.backoff.ms = 50 13:57:46 policy-pap | request.timeout.ms = 30000 13:57:46 policy-db-migrator | 13:57:46 kafka | transaction.state.log.segment.bytes = 104857600 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.473163264Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.802856ms 13:57:46 policy-pap | retry.backoff.ms = 100 13:57:46 policy-pap | sasl.client.callback.handler.class = null 13:57:46 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 13:57:46 kafka | transactional.id.expiration.ms = 604800000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.476536152Z level=info msg="Executing migration" id="Add column external to team_member table" 13:57:46 policy-pap | sasl.jaas.config = null 13:57:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | unclean.leader.election.enable = false 13:57:46 grafana | logger=migrator 
t=2024-01-22T13:55:07.481259196Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.722364ms 13:57:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:57:46 policy-pap | sasl.kerberos.service.name = null 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 kafka | unstable.api.versions.enable = false 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.484023739Z level=info msg="Executing migration" id="Add column permission to team_member table" 13:57:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:57:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | zookeeper.clientCnxnSocket = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.488578238Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.553679ms 13:57:46 policy-pap | sasl.login.callback.handler.class = null 13:57:46 policy-pap | sasl.login.class = null 13:57:46 policy-db-migrator | 13:57:46 kafka | zookeeper.connect = zookeeper:2181 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.492529092Z level=info msg="Executing migration" id="create dashboard acl table" 13:57:46 policy-pap | sasl.login.connect.timeout.ms = null 13:57:46 kafka | zookeeper.connection.timeout.ms = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.493431516Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=902.334µs 13:57:46 policy-apex-pdp | ssl.truststore.type = JKS 13:57:46 kafka | zookeeper.max.in.flight.requests = 10 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.496341471Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 13:57:46 policy-pap | sasl.login.read.timeout.ms = null 13:57:46 policy-db-migrator | 13:57:46 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 kafka | zookeeper.metadata.migration.enable = false 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.497321197Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=979.366µs 13:57:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:57:46 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 13:57:46 policy-apex-pdp | 13:57:46 kafka | zookeeper.session.timeout.ms = 18000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.500198832Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 13:57:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.685+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 13:57:46 kafka | zookeeper.set.acl = false 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.50127534Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.075998ms 13:57:46 policy-pap | sasl.login.refresh.window.factor = 0.8 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 policy-apex-pdp | 
[2024-01-22T13:55:49.685+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 13:57:46 kafka | zookeeper.ssl.cipher.suites = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.505007688Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 13:57:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.685+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931749683 13:57:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:57:46 policy-pap | sasl.login.retry.backoff.ms = 100 13:57:46 kafka | zookeeper.ssl.client.enable = false 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.505999024Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=990.816µs 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.mechanism = GSSAPI 13:57:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:57:46 kafka | zookeeper.ssl.crl.enable = false 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.511038386Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.oauthbearer.expected.audience = null 13:57:46 policy-pap | sasl.oauthbearer.expected.issuer = null 13:57:46 kafka | zookeeper.ssl.enabled.protocols = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.512018392Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=979.816µs 13:57:46 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:57:46 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.51575356Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:57:46 kafka | zookeeper.ssl.keystore.location = null 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.516723555Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=969.985µs 13:57:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:57:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:57:46 kafka | zookeeper.ssl.keystore.password = null 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.519693033Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 13:57:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:57:46 policy-pap | security.protocol = PLAINTEXT 13:57:46 policy-db-migrator | 13:57:46 kafka | zookeeper.ssl.keystore.type = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.520686269Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=992.026µs 13:57:46 policy-pap | security.providers = null 13:57:46 policy-pap | send.buffer.bytes = 131072 13:57:46 policy-db-migrator | 13:57:46 kafka | zookeeper.ssl.ocsp.enable = 
false 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.5245162Z level=info msg="Executing migration" id="add index dashboard_permission" 13:57:46 policy-pap | session.timeout.ms = 45000 13:57:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:57:46 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 13:57:46 kafka | zookeeper.ssl.protocol = TLSv1.2 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.525705261Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.189441ms 13:57:46 policy-pap | socket.connection.setup.timeout.ms = 10000 13:57:46 policy-pap | ssl.cipher.suites = null 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | zookeeper.ssl.truststore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.528630828Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 13:57:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:57:46 policy-pap | ssl.endpoint.identification.algorithm = https 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 kafka | zookeeper.ssl.truststore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.529303235Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=671.907µs 13:57:46 policy-pap | ssl.engine.factory.class = null 13:57:46 policy-pap | ssl.key.password = null 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | zookeeper.ssl.truststore.type = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.533106515Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 13:57:46 policy-pap | ssl.keymanager.algorithm = SunX509 13:57:46 policy-pap | ssl.keystore.certificate.chain = null 13:57:46 policy-db-migrator | 13:57:46 kafka | (kafka.server.KafkaConfig) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.533402383Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=295.488µs 13:57:46 policy-pap | ssl.keystore.key = null 13:57:46 policy-pap | ssl.keystore.location = null 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:17,869] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.536747971Z level=info msg="Executing migration" id="create tag table" 13:57:46 policy-pap | ssl.keystore.password = null 13:57:46 policy-pap | ssl.keystore.type = JKS 13:57:46 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 13:57:46 kafka | [2024-01-22 13:55:17,870] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.537980943Z level=info msg="Migration successfully executed" id="create tag table" duration=1.229613ms 13:57:46 policy-pap | ssl.protocol = TLSv1.3 13:57:46 policy-pap | ssl.provider = null 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:17,872] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.54245833Z level=info msg="Executing migration" id="add index tag.key_value" 13:57:46 
policy-pap | ssl.secure.random.implementation = null 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.688+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-1, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Subscribed to topic(s): policy-pdp-pap 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 kafka | [2024-01-22 13:55:17,877] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.543892108Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.434618ms 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.703+00:00|INFO|ServiceManager|main] service manager starting 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.704+00:00|INFO|ServiceManager|main] service manager starting topics 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:17,912] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.547332128Z level=info msg="Executing migration" id="create login attempt table" 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.711+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e65163a7-0954-4bf8-9924-8c41fa40f9af, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.736+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:17,919] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.548091628Z level=info msg="Migration successfully executed" id="create login attempt table" duration=759.26µs 13:57:46 policy-apex-pdp | allow.auto.create.topics = true 13:57:46 policy-apex-pdp | auto.commit.interval.ms = 5000 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:17,929] INFO Loaded 0 logs in 17ms (kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.551878357Z level=info msg="Executing migration" id="add index login_attempt.username" 13:57:46 policy-apex-pdp | auto.include.jmx.reporter = true 13:57:46 policy-apex-pdp | auto.offset.reset = latest 13:57:46 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 13:57:46 kafka | [2024-01-22 13:55:17,931] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.552874273Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=998.696µs 13:57:46 policy-apex-pdp | bootstrap.servers = [kafka:9092] 13:57:46 policy-apex-pdp | check.crcs = true 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:17,934] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.557171266Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 13:57:46 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 13:57:46 policy-apex-pdp | client.id = consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 kafka | [2024-01-22 13:55:17,955] INFO Starting the log cleaner (kafka.log.LogCleaner) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.558154612Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=983.176µs 13:57:46 policy-apex-pdp | client.rack = 13:57:46 policy-pap | ssl.trustmanager.algorithm = PKIX 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:18,000] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.562701001Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 13:57:46 policy-apex-pdp | connections.max.idle.ms = 540000 13:57:46 policy-pap | ssl.truststore.certificates = null 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,016] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.579975524Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.270263ms 13:57:46 policy-apex-pdp | default.api.timeout.ms = 60000 13:57:46 policy-pap | ssl.truststore.location = null 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,029] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.58363585Z level=info msg="Executing migration" id="create login_attempt v2" 13:57:46 policy-apex-pdp | enable.auto.commit = true 13:57:46 policy-pap | ssl.truststore.password = null 13:57:46 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 13:57:46 kafka | [2024-01-22 13:55:18,086] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.584276947Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=641.257µs 13:57:46 policy-apex-pdp | exclude.internal.topics = true 13:57:46 policy-pap | ssl.truststore.type = JKS 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:18,423] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.588168229Z level=info 
msg="Executing migration" id="create index IDX_login_attempt_username - v2" 13:57:46 policy-apex-pdp | fetch.max.bytes = 52428800 13:57:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 kafka | [2024-01-22 13:55:18,446] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.589303769Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.13171ms 13:57:46 policy-apex-pdp | fetch.max.wait.ms = 500 13:57:46 policy-pap | 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:18,447] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.593210011Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 13:57:46 policy-apex-pdp | fetch.min.bytes = 1 13:57:46 policy-pap | [2024-01-22T13:55:46.446+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,453] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.593808157Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=597.596µs 13:57:46 policy-apex-pdp | group.id = e65163a7-0954-4bf8-9924-8c41fa40f9af 13:57:46 policy-pap | [2024-01-22T13:55:46.446+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,457] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.597557215Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 13:57:46 policy-apex-pdp | group.instance.id = null 13:57:46 policy-pap | [2024-01-22T13:55:46.446+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931746444 13:57:46 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 13:57:46 kafka | [2024-01-22 13:55:18,476] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.598295525Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=737.64µs 13:57:46 policy-apex-pdp | heartbeat.interval.ms = 3000 13:57:46 policy-pap | [2024-01-22T13:55:46.449+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-1, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Subscribed to topic(s): policy-pdp-pap 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:18,480] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.601993242Z level=info msg="Executing migration" id="create user auth table" 13:57:46 policy-apex-pdp | interceptor.classes = [] 
13:57:46 policy-pap | [2024-01-22T13:55:46.449+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 kafka | [2024-01-22 13:55:18,479] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.602802223Z level=info msg="Migration successfully executed" id="create user auth table" duration=807.751µs 13:57:46 policy-apex-pdp | internal.leave.group.on.close = true 13:57:46 policy-pap | allow.auto.create.topics = true 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:18,481] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.605994727Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 13:57:46 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 13:57:46 policy-pap | auto.commit.interval.ms = 5000 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,496] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.607052284Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.057257ms 13:57:46 policy-apex-pdp | isolation.level = read_uncommitted 13:57:46 policy-pap | auto.include.jmx.reporter = true 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,524] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.610176866Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 13:57:46 policy-pap | auto.offset.reset = latest 13:57:46 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 kafka | [2024-01-22 13:55:18,598] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705931718568,1705931718568,1,0,0,72057614900985857,258,0,27 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.61031808Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=140.784µs 13:57:46 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 13:57:46 policy-pap | bootstrap.servers = [kafka:9092] 13:57:46 policy-apex-pdp | max.partition.fetch.bytes = 1048576 13:57:46 kafka | (kafka.zk.KafkaZkClient) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.614822258Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | check.crcs = true 13:57:46 kafka | [2024-01-22 13:55:18,599] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.619998374Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.175406ms 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | client.dns.lookup = use_all_dns_ips 13:57:46 policy-apex-pdp | max.poll.interval.ms = 300000 13:57:46 kafka | [2024-01-22 13:55:18,860] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.623204028Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | client.id = consumer-policy-pap-2 13:57:46 policy-apex-pdp | max.poll.records = 500 13:57:46 kafka | [2024-01-22 13:55:18,868] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.628292491Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.087583ms 13:57:46 policy-db-migrator | 13:57:46 policy-pap | client.rack = 13:57:46 policy-apex-pdp | metadata.max.age.ms = 300000 13:57:46 kafka | [2024-01-22 13:55:18,875] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.631823354Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | connections.max.idle.ms = 540000 13:57:46 policy-apex-pdp | metric.reporters = [] 13:57:46 kafka | [2024-01-22 13:55:18,875] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.636919418Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.096124ms 13:57:46 policy-db-migrator 
| > upgrade 0280-jpatoscapolicy_metadata.sql 13:57:46 policy-pap | default.api.timeout.ms = 60000 13:57:46 policy-apex-pdp | metrics.num.samples = 2 13:57:46 kafka | [2024-01-22 13:55:18,898] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.641342314Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | enable.auto.commit = true 13:57:46 policy-apex-pdp | metrics.recording.level = INFO 13:57:46 kafka | [2024-01-22 13:55:18,906] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.646410576Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.067712ms 13:57:46 policy-pap | exclude.internal.topics = true 13:57:46 policy-apex-pdp | metrics.sample.window.ms = 30000 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 kafka | [2024-01-22 13:55:18,911] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.650243757Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 13:57:46 policy-pap | fetch.max.bytes = 52428800 13:57:46 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:18,913] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.651177832Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=934.114µs 13:57:46 policy-pap | fetch.max.wait.ms = 500 13:57:46 policy-apex-pdp | receive.buffer.bytes = 65536 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,915] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.654998102Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 13:57:46 policy-pap | fetch.min.bytes = 1 13:57:46 policy-apex-pdp | reconnect.backoff.max.ms = 1000 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:18,932] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.66408046Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=9.078188ms 13:57:46 policy-apex-pdp | reconnect.backoff.ms = 50 13:57:46 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 13:57:46 kafka | [2024-01-22 13:55:19,015] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.669510602Z level=info msg="Executing migration" id="create server_lock table" 13:57:46 policy-pap | group.id = policy-pap 13:57:46 policy-apex-pdp | request.timeout.ms = 30000 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:19,017] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.670388345Z level=info msg="Migration successfully executed" id="create server_lock table" duration=877.273µs 13:57:46 policy-pap | group.instance.id = null 13:57:46 policy-apex-pdp | retry.backoff.ms = 100 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 kafka | [2024-01-22 13:55:19,018] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 13:57:46 kafka | [2024-01-22 13:55:19,055] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 13:57:46 policy-pap | heartbeat.interval.ms = 3000 13:57:46 policy-apex-pdp | sasl.client.callback.handler.class = null 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:19,057] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.673825855Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 13:57:46 policy-apex-pdp | sasl.jaas.config = null 13:57:46 policy-db-migrator | 13:57:46 policy-pap | interceptor.classes = [] 13:57:46 kafka | [2024-01-22 13:55:19,062] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.674811231Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=985.996µs 13:57:46 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:57:46 policy-db-migrator | 13:57:46 policy-pap | internal.leave.group.on.close = true 13:57:46 kafka | [2024-01-22 13:55:19,073] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.678365195Z level=info msg="Executing migration" id="create user auth token table" 13:57:46 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 13:57:46 policy-apex-pdp | sasl.kerberos.service.name = null 13:57:46 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 13:57:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 13:57:46 kafka | [2024-01-22 13:55:19,077] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 13:57:46 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 13:57:46 
grafana | logger=migrator t=2024-01-22T13:55:07.679226377Z level=info msg="Migration successfully executed" id="create user auth token table" duration=860.453µs 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | isolation.level = read_uncommitted 13:57:46 kafka | [2024-01-22 13:55:19,081] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 13:57:46 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.682795921Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 13:57:46 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 kafka | [2024-01-22 13:55:19,091] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 13:57:46 policy-apex-pdp | sasl.login.callback.handler.class = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.68391961Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.124029ms 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | max.partition.fetch.bytes = 1048576 13:57:46 kafka | [2024-01-22 13:55:19,100] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 13:57:46 policy-apex-pdp | sasl.login.class = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.688469299Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | max.poll.interval.ms = 300000 13:57:46 kafka | [2024-01-22 13:55:19,104] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 13:57:46 policy-apex-pdp | sasl.login.connect.timeout.ms = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.689512637Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.043428ms 13:57:46 policy-db-migrator | 13:57:46 policy-pap | max.poll.records = 500 13:57:46 kafka | [2024-01-22 13:55:19,109] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 13:57:46 policy-apex-pdp | sasl.login.read.timeout.ms = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.693284766Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 13:57:46 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 13:57:46 policy-pap | metadata.max.age.ms = 300000 13:57:46 kafka | [2024-01-22 13:55:19,119] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 13:57:46 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.694334043Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.049797ms 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | metric.reporters = [] 13:57:46 kafka | [2024-01-22 13:55:19,123] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) 13:57:46 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.699444487Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | metrics.num.samples = 2 13:57:46 kafka | [2024-01-22 13:55:19,123] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 13:57:46 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.705188438Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.743481ms 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | metrics.recording.level = INFO 13:57:46 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.708109075Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 13:57:46 kafka | [2024-01-22 13:55:19,125] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | metrics.sample.window.ms = 30000 13:57:46 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.709202843Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.094589ms 13:57:46 kafka | [2024-01-22 13:55:19,125] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 13:57:46 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.712045938Z level=info msg="Executing migration" id="create cache_data table" 13:57:46 kafka | [2024-01-22 13:55:19,125] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 13:57:46 policy-pap | receive.buffer.bytes = 65536 13:57:46 policy-apex-pdp | sasl.mechanism = GSSAPI 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.71290679Z level=info msg="Migration successfully executed" id="create cache_data table" duration=860.292µs 13:57:46 kafka | [2024-01-22 13:55:19,125] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | reconnect.backoff.max.ms = 1000 13:57:46 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.716957247Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 13:57:46 kafka | [2024-01-22 13:55:19,126] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | reconnect.backoff.ms = 50 13:57:46 
policy-apex-pdp | sasl.oauthbearer.expected.audience = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.717927412Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=970.015µs 13:57:46 kafka | [2024-01-22 13:55:19,132] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | request.timeout.ms = 30000 13:57:46 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.721361772Z level=info msg="Executing migration" id="create short_url table v1" 13:57:46 kafka | [2024-01-22 13:55:19,132] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | retry.backoff.ms = 100 13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.722234545Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=870.123µs 13:57:46 kafka | [2024-01-22 13:55:19,132] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.client.callback.handler.class = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.726138387Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 13:57:46 kafka | [2024-01-22 13:55:19,133] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 13:57:46 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 13:57:46 policy-pap | sasl.jaas.config = null 13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.727141624Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.002277ms 13:57:46 kafka | [2024-01-22 13:55:19,134] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.731302173Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 13:57:46 kafka | [2024-01-22 13:55:19,136] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 13:57:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.731442136Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=139.263µs 13:57:46 kafka | [2024-01-22 13:55:19,136] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.kerberos.service.name = null 13:57:46 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 13:57:46 grafana | logger=migrator 
t=2024-01-22T13:55:07.734863836Z level=info msg="Executing migration" id="delete alert_definition table" 13:57:46 kafka | [2024-01-22 13:55:19,136] INFO Kafka startTimeMs: 1705931719131 (org.apache.kafka.common.utils.AppInfoParser) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:57:46 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.735109853Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=245.717µs 13:57:46 kafka | [2024-01-22 13:55:19,137] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:57:46 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.739032116Z level=info msg="Executing migration" id="recreate alert_definition table" 13:57:46 kafka | [2024-01-22 13:55:19,137] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 13:57:46 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 13:57:46 policy-pap | sasl.login.callback.handler.class = null 13:57:46 policy-apex-pdp | security.protocol = PLAINTEXT 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.740442683Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.410147ms 13:57:46 kafka | [2024-01-22 13:55:19,144] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.login.class = null 13:57:46 policy-apex-pdp | security.providers = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.744195881Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 13:57:46 kafka | [2024-01-22 13:55:19,145] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 13:57:46 policy-pap | sasl.login.connect.timeout.ms = null 13:57:46 policy-apex-pdp | send.buffer.bytes = 131072 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.745257679Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.062018ms 13:57:46 kafka | [2024-01-22 13:55:19,148] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.login.read.timeout.ms = null 13:57:46 policy-apex-pdp | session.timeout.ms = 45000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.750337672Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 13:57:46 kafka | [2024-01-22 13:55:19,154] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:57:46 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.751328678Z level=info 
msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=989.186µs 13:57:46 kafka | [2024-01-22 13:55:19,178] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:57:46 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.754971254Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 13:57:46 kafka | [2024-01-22 13:55:19,179] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 13:57:46 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 13:57:46 policy-pap | sasl.login.refresh.window.factor = 0.8 13:57:46 policy-apex-pdp | ssl.cipher.suites = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.755112337Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=142.324µs 13:57:46 kafka | [2024-01-22 13:55:19,185] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:57:46 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.758017773Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 13:57:46 kafka | [2024-01-22 13:55:19,185] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 13:57:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:57:46 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.758970678Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=952.695µs 13:57:46 kafka | [2024-01-22 13:55:19,185] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.login.retry.backoff.ms = 100 13:57:46 policy-apex-pdp | ssl.engine.factory.class = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.762956553Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 13:57:46 kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.mechanism = GSSAPI 13:57:46 policy-apex-pdp | ssl.key.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.764016511Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.057228ms 13:57:46 kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Partitions 
that completed preferred replica election: (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:57:46 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.768532449Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 13:57:46 kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 13:57:46 policy-pap | sasl.oauthbearer.expected.audience = null 13:57:46 policy-apex-pdp | ssl.keystore.certificate.chain = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.769517265Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=983.126µs 13:57:46 kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.oauthbearer.expected.issuer = null 13:57:46 policy-apex-pdp | ssl.keystore.key = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.773977922Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 13:57:46 kafka | [2024-01-22 13:55:19,193] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:57:46 policy-apex-pdp | ssl.keystore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.774966908Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=988.336µs 13:57:46 kafka | [2024-01-22 13:55:19,267] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:57:46 policy-apex-pdp | ssl.keystore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.777813003Z level=info msg="Executing migration" id="Add column paused in alert_definition" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:57:46 policy-apex-pdp | ssl.keystore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.783491761Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.678378ms 13:57:46 kafka | [2024-01-22 13:55:19,272] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:57:46 policy-apex-pdp | ssl.protocol = TLSv1.3 13:57:46 
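The policy-db-migrator entries woven through this stretch of the log are stepping through numbered upgrade scripts (0320 through 0360 at this point), each bracketed by "--------------" markers and issuing CREATE TABLE IF NOT EXISTS, so re-running a script against an already-migrated schema is a no-op. A rough JDBC sketch of that idempotent pattern, with a hypothetical connection URL, credentials, and script path (the real values come from the CSIT compose environment, and the actual migrator may apply scripts differently):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.List;

    public class MigrateSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL/credentials for illustration only.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user")) {
                // Scripts are applied in numeric order; IF NOT EXISTS makes each step safe to repeat.
                for (Path script : List.of(Path.of("0320-jpatoscapolicytype_properties.sql"))) {
                    try (Statement st = conn.createStatement()) {
                        // Naive split on ';' is enough for plain DDL files like these.
                        for (String ddl : Files.readString(script).split(";")) {
                            if (!ddl.isBlank()) st.execute(ddl);
                        }
                    }
                }
            }
        }
    }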
grafana | logger=migrator t=2024-01-22T13:55:07.824658751Z level=info msg="Executing migration" id="drop alert_definition table" 13:57:46 kafka | [2024-01-22 13:55:19,307] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 13:57:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:57:46 policy-apex-pdp | ssl.provider = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.82612591Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.466338ms 13:57:46 kafka | [2024-01-22 13:55:19,350] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:57:46 policy-apex-pdp | ssl.secure.random.implementation = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.831444889Z level=info msg="Executing migration" id="delete alert_definition_version table" 13:57:46 kafka | [2024-01-22 13:55:24,351] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 13:57:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:57:46 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.831708046Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=260.937µs 13:57:46 kafka | [2024-01-22 13:55:24,352] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | security.protocol = PLAINTEXT 13:57:46 policy-apex-pdp | ssl.truststore.certificates = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.834873639Z level=info msg="Executing migration" id="recreate alert_definition_version table" 13:57:46 kafka | [2024-01-22 13:55:48,946] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | security.providers = null 13:57:46 policy-apex-pdp | ssl.truststore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.835792893Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=918.984µs 13:57:46 kafka | [2024-01-22 13:55:48,952] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> 
ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | send.buffer.bytes = 131072 13:57:46 policy-apex-pdp | ssl.truststore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.839258444Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 13:57:46 kafka | [2024-01-22 13:55:48,954] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | session.timeout.ms = 45000 13:57:46 policy-apex-pdp | ssl.truststore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.840312252Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.053318ms 13:57:46 kafka | [2024-01-22 13:55:48,958] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:57:46 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 13:57:46 policy-pap | socket.connection.setup.timeout.ms = 10000 13:57:46 kafka | [2024-01-22 13:55:48,994] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(RF6sJHOeSLeKzNa2An6Amw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.847451369Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 13:57:46 policy-apex-pdp | 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.cipher.suites = null 13:57:46 kafka | [2024-01-22 13:55:48,994] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.848495976Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.044577ms 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.751+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:57:46 kafka | [2024-01-22 13:55:48,996] INFO [Controller id=1 
epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.852096841Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.751+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.endpoint.identification.algorithm = https 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.852363128Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=266.647µs 13:57:46 kafka | [2024-01-22 13:55:48,997] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.752+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931749751 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.engine.factory.class = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.855810828Z level=info msg="Executing migration" id="drop alert_definition_version table" 13:57:46 kafka | [2024-01-22 13:55:49,001] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.752+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Subscribed to topic(s): policy-pdp-pap 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.key.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.857303907Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.491959ms 13:57:46 kafka | [2024-01-22 13:55:49,001] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.753+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=793310ba-b44a-41bd-a3a3-fc0762926d3d, alive=false, publisher=null]]: starting 13:57:46 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 13:57:46 policy-pap | ssl.keymanager.algorithm = SunX509 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.864477365Z level=info msg="Executing migration" id="create alert_instance table" 13:57:46 kafka | [2024-01-22 13:55:49,045] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.830+00:00|INFO|ProducerConfig|main] ProducerConfig values: 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.keystore.certificate.chain = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.86542526Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=945.765µs 13:57:46 kafka | [2024-01-22 13:55:49,048] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', 
partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 13:57:46 policy-apex-pdp | acks = -1 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | ssl.keystore.key = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.8684487Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 13:57:46 kafka | [2024-01-22 13:55:49,049] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 13:57:46 policy-apex-pdp | auto.include.jmx.reporter = true 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.keystore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.869447166Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=998.437µs 13:57:46 kafka | [2024-01-22 13:55:49,051] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 13:57:46 policy-apex-pdp | batch.size = 16384 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.keystore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.875534375Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 13:57:46 kafka | [2024-01-22 13:55:49,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-apex-pdp | bootstrap.servers = [kafka:9092] 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.keystore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.876541252Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.006477ms 13:57:46 kafka | [2024-01-22 13:55:49,052] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 13:57:46 policy-apex-pdp | buffer.memory = 33554432 13:57:46 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 13:57:46 policy-pap | ssl.protocol = TLSv1.3 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.880079035Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 13:57:46 kafka | [2024-01-22 13:55:49,063] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 13:57:46 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.provider = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.887599032Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.519438ms 13:57:46 kafka | [2024-01-22 13:55:49,064] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 13:57:46 policy-apex-pdp | client.id = producer-1 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 13:57:46 policy-pap | ssl.secure.random.implementation = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.890894128Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 13:57:46 kafka | [2024-01-22 13:55:49,065] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(Zh415qvLQvmHe6oa34REOg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 13:57:46 policy-apex-pdp | compression.type = none 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.trustmanager.algorithm = PKIX 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.892029348Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.13482ms 13:57:46 kafka | [2024-01-22 13:55:49,065] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 13:57:46 policy-apex-pdp | 
connections.max.idle.ms = 540000 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.truststore.certificates = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.897016279Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 13:57:46 kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | delivery.timeout.ms = 120000 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.truststore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.897684876Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=668.707µs 13:57:46 kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | enable.idempotence = true 13:57:46 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 13:57:46 policy-pap | ssl.truststore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.900521491Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 13:57:46 kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | interceptor.classes = [] 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.truststore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.935807306Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=35.283735ms 13:57:46 kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 13:57:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.938951938Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 13:57:46 kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | linger.ms = 0 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.976567435Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.614097ms 13:57:46 kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | max.block.ms = 60000 13:57:46 policy-db-migrator | 13:57:46 policy-pap | 
[2024-01-22T13:55:46.459+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.981097134Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 13:57:46 kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | max.in.flight.requests.per.connection = 5 13:57:46 policy-db-migrator | 13:57:46 policy-pap | [2024-01-22T13:55:46.460+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.981849944Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=751.689µs 13:57:46 kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | max.request.size = 1048576 13:57:46 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 13:57:46 policy-pap | [2024-01-22T13:55:46.460+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931746459 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.984818021Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 13:57:46 kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | metadata.max.age.ms = 300000 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:46.460+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.985827368Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.008387ms 13:57:46 kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-apex-pdp | metadata.max.idle.ms = 300000 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.989892784Z level=info msg="Executing migration" id="add current_reason column related to current_state" 13:57:46 kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:46.816+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, 
pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 13:57:46 policy-apex-pdp | metric.reporters = [] 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:07.995580884Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.68757ms 13:57:46 kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:47.004+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 13:57:46 policy-apex-pdp | metrics.num.samples = 2 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.000351109Z level=info msg="Executing migration" id="create alert_rule table" 13:57:46 kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:47.261+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1be4a7e3, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@632b96b8, org.springframework.security.web.context.SecurityContextHolderFilter@8091d80, org.springframework.security.web.header.HeaderWriterFilter@3909308c, org.springframework.security.web.authentication.logout.LogoutFilter@2ffcdc9b, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@41463c56, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6958d5d0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7169d668, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@544e6b, org.springframework.security.web.access.ExceptionTranslationFilter@2e2cd42c, org.springframework.security.web.access.intercept.AuthorizationFilter@1adf387e] 13:57:46 policy-apex-pdp | metrics.recording.level = INFO 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.001737495Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.385716ms 13:57:46 kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.104+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 13:57:46 policy-apex-pdp | metrics.sample.window.ms = 30000 13:57:46 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.228+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 13:57:46 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 13:57:46 kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.261+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 13:57:46 policy-apex-pdp | partitioner.availability.timeout.ms = 0 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.281+00:00|INFO|ServiceManager|main] Policy PAP starting 13:57:46 policy-apex-pdp | partitioner.class = null 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.281+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 13:57:46 policy-apex-pdp | partitioner.ignore.keys = false 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.283+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 13:57:46 policy-apex-pdp | receive.buffer.bytes = 32768 13:57:46 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.283+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 13:57:46 policy-apex-pdp | reconnect.backoff.max.ms = 1000 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.283+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 13:57:46 policy-apex-pdp | reconnect.backoff.ms = 50 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.284+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 13:57:46 policy-apex-pdp | 
request.timeout.ms = 30000 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.284+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 13:57:46 policy-apex-pdp | retries = 2147483647 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.290+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=79c954dd-4645-472b-b928-ee2d4186f7c1, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6af29394 13:57:46 policy-apex-pdp | retry.backoff.ms = 100 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.302+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=79c954dd-4645-472b-b928-ee2d4186f7c1, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:57:46 policy-apex-pdp | sasl.client.callback.handler.class = null 13:57:46 policy-db-migrator | > upgrade 0450-pdpgroup.sql 13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.303+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 13:57:46 policy-apex-pdp | sasl.jaas.config = null 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 13:57:46 policy-pap | allow.auto.create.topics = true 13:57:46 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 13:57:46 kafka | [2024-01-22 
13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | auto.commit.interval.ms = 5000
13:57:46 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | auto.include.jmx.reporter = true
13:57:46 policy-apex-pdp | sasl.kerberos.service.name = null
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | auto.offset.reset = latest
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
13:57:46 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | bootstrap.servers = [kafka:9092]
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | check.crcs = true
13:57:46 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | client.dns.lookup = use_all_dns_ips
13:57:46 policy-apex-pdp | sasl.login.callback.handler.class = null
13:57:46 kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | client.id = consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | sasl.login.class = null
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | client.rack =
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | sasl.login.connect.timeout.ms = null
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | connections.max.idle.ms = 540000
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | sasl.login.read.timeout.ms = null
13:57:46 policy-pap | default.api.timeout.ms = 60000
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0470-pdp.sql
13:57:46 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
13:57:46 policy-pap | enable.auto.commit = true
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
13:57:46 policy-pap | exclude.internal.topics = true
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:57:46 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
13:57:46 policy-pap | fetch.max.bytes = 52428800
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
13:57:46 policy-pap | fetch.max.wait.ms = 500
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
13:57:46 policy-pap | fetch.min.bytes = 1
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
13:57:46 policy-pap | group.id = 79c954dd-4645-472b-b928-ee2d4186f7c1
13:57:46 policy-pap | group.instance.id = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.006998824Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.008181829Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.180896ms
13:57:46 policy-apex-pdp | sasl.mechanism = GSSAPI
13:57:46 policy-pap | heartbeat.interval.ms = 3000
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.011347508Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
13:57:46 policy-pap | interceptor.classes = []
13:57:46 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
13:57:46 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.012517531Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.169863ms
13:57:46 policy-pap | internal.leave.group.on.close = true
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.021329499Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
13:57:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
13:57:46 kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.022389189Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.05678ms
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | isolation.level = read_uncommitted
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.025688832Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.025776424Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=88.392µs
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
13:57:46 policy-pap | max.partition.fetch.bytes = 1048576
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.028695006Z level=info msg="Executing migration" id="add column for to alert_rule"
13:57:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
13:57:46 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
13:57:46 kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:57:46 policy-pap | max.poll.interval.ms = 300000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.037079612Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.385566ms
13:57:46 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,074] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | max.poll.records = 500
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.041518647Z level=info msg="Executing migration" id="add column annotations to alert_rule"
13:57:46 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
13:57:46 kafka | [2024-01-22 13:55:49,074] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | metadata.max.age.ms = 300000
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.047329351Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.810263ms
13:57:46 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | metric.reporters = []
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.050841689Z level=info msg="Executing migration" id="add column labels to alert_rule"
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | security.protocol = PLAINTEXT
13:57:46 policy-pap | metrics.num.samples = 2
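The policy-pap entries interleaved above are the ConsumerConfig dump for PAP's policy-pdp-pap consumer: bootstrap.servers=[kafka:9092], a UUID-based group.id, auto.offset.reset=latest, auto-commit enabled, and String deserializers. A minimal sketch, assuming the stock Kafka Java client, that builds an equivalent consumer from those logged values; PAP itself wires this up through policy-common's topic endpoints, so this is illustrative only:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PapConsumerSketch {
        public static void main(String[] args) {
            // Values copied from the ConsumerConfig dump logged above.
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "79c954dd-4645-472b-b928-ee2d4186f7c1");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Subscribing is what later triggers the group-join sequence
                // seen further down in this log.
                consumer.subscribe(List.of("policy-pdp-pap"));
            }
        }
    }
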
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.056797737Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.953218ms
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | security.providers = null
13:57:46 policy-pap | metrics.recording.level = INFO
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.060062089Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | send.buffer.bytes = 131072
13:57:46 policy-pap | metrics.sample.window.ms = 30000
13:57:46 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.061081557Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.019498ms
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
13:57:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.064834333Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
13:57:46 policy-pap | receive.buffer.bytes = 65536
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.065937564Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.099551ms
13:57:46 policy-apex-pdp | ssl.cipher.suites = null
13:57:46 policy-pap | reconnect.backoff.max.ms = 1000
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.069525295Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
13:57:46 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:57:46 policy-pap | reconnect.backoff.ms = 50
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.079486425Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.96048ms
13:57:46 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
13:57:46 policy-pap | request.timeout.ms = 30000
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.082798118Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
13:57:46 policy-apex-pdp | ssl.engine.factory.class = null
13:57:46 policy-pap | retry.backoff.ms = 100
13:57:46 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.088664313Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.866595ms
13:57:46 policy-apex-pdp | ssl.key.password = null
13:57:46 policy-pap | sasl.client.callback.handler.class = null
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.092356217Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
13:57:46 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
13:57:46 policy-pap | sasl.jaas.config = null
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.09314779Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=790.992µs
13:57:46 policy-apex-pdp | ssl.keystore.certificate.chain = null
13:57:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.095631159Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
13:57:46 policy-apex-pdp | ssl.keystore.key = null
13:57:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.099846188Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.214739ms
13:57:46 policy-apex-pdp | ssl.keystore.location = null
13:57:46 policy-pap | sasl.kerberos.service.name = null
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.102880393Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
13:57:46 policy-apex-pdp | ssl.keystore.password = null
13:57:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
13:57:46 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.109613173Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.73212ms
13:57:46 policy-apex-pdp | ssl.keystore.type = JKS
13:57:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.116364483Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
13:57:46 policy-apex-pdp | ssl.protocol = TLSv1.3
13:57:46 policy-pap | sasl.login.callback.handler.class = null
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.116515127Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=150.424µs
13:57:46 policy-apex-pdp | ssl.provider = null
13:57:46 policy-pap | sasl.login.class = null
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | ssl.secure.random.implementation = null
13:57:46 policy-pap | sasl.login.connect.timeout.ms = null
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.124120751Z level=info msg="Executing migration" id="create alert_rule_version table"
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
13:57:46 policy-pap | sasl.login.read.timeout.ms = null
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.125833069Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.711968ms
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | ssl.truststore.certificates = null
13:57:46 policy-pap | sasl.login.refresh.buffer.seconds = 300
13:57:46 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.129306197Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | ssl.truststore.location = null
13:57:46 policy-pap | sasl.login.refresh.min.period.seconds = 60
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.130455849Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.148922ms
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | ssl.truststore.password = null
13:57:46 policy-pap | sasl.login.refresh.window.factor = 0.8
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.133587657Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | ssl.truststore.type = JKS
13:57:46 policy-pap | sasl.login.refresh.window.jitter = 0.05
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.13476937Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.181283ms
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-apex-pdp | transaction.timeout.ms = 60000
13:57:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000
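Each policy-db-migrator step above follows the same pattern: announce "> upgrade NNNN-*.sql", then run a CREATE TABLE IF NOT EXISTS, which is what makes a step safe to re-run against an already-initialized schema. A sketch of that pattern over JDBC, with a MariaDB URL, credentials, and driver assumed purely for illustration; the actual migrator feeds these .sql files to the database directly rather than going through Java:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigratorStepSketch {
        public static void main(String[] args) throws Exception {
            // URL and credentials are hypothetical, not taken from this job's config.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_password");
                 Statement stmt = conn.createStatement()) {
                // DDL copied from the 0520-toscacapabilityassignments.sql step above;
                // IF NOT EXISTS makes re-execution a no-op.
                stmt.execute("CREATE TABLE IF NOT EXISTS toscacapabilityassignments ("
                        + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                        + "PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))");
            }
        }
    }
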
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.13900684Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
13:57:46 policy-apex-pdp | transactional.id = null
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.login.retry.backoff.ms = 100
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.139165794Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=159.294µs
13:57:46 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.mechanism = GSSAPI
13:57:46 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.141768127Z level=info msg="Executing migration" id="add column for to alert_rule_version"
13:57:46 policy-apex-pdp |
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.148343682Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.575325ms
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.854+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.expected.audience = null
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.15147063Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.881+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.expected.issuer = null
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.158322663Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.851653ms
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.881+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.161221205Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.881+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931749881
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.165706231Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.482496ms
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.882+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=793310ba-b44a-41bd-a3a3-fc0762926d3d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:57:46 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.168310314Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.882+00:00|INFO|ServiceManager|main] service manager starting set alive
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.174432436Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.122112ms
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.882+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.246637278Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.886+00:00|INFO|ServiceManager|main] service manager starting topic sinks
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.258676086Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=12.034549ms
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.886+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.265423036Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | security.protocol = PLAINTEXT
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.265500148Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=78.182µs
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | security.providers = null
13:57:46 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.270980042Z level=info msg="Executing migration" id=create_alert_configuration_table
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | send.buffer.bytes = 131072
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.272087454Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.105331ms
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | session.timeout.ms = 45000
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e65163a7-0954-4bf8-9924-8c41fa40f9af, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.276115637Z level=info msg="Executing migration" id="Add column default in alert_configuration"
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.893+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e65163a7-0954-4bf8-9924-8c41fa40f9af, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.285565943Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.446935ms
13:57:46 policy-pap | socket.connection.setup.timeout.ms = 10000
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.893+00:00|INFO|ServiceManager|main] service manager starting Create REST server
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.291671815Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.914+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.291906661Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=237.167µs
13:57:46 policy-pap | ssl.cipher.suites = null
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | []
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.298795685Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
13:57:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | [2024-01-22T13:55:49.926+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.303354963Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.563068ms
13:57:46 policy-pap | ssl.endpoint.identification.algorithm = https
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0570-toscadatatype.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.306004848Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
13:57:46 kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7bc51e59-0fe3-438a-aab9-b5da4616d765","timestampMs":1705931749893,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 policy-pap | ssl.engine.factory.class = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.306709688Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=706.72µs
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
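The apex-pdp lines above show an idempotent Kafka producer being created (client 3.6.0, String value serializer) and the first PDP_STATUS heartbeat going out on policy-pdp-pap. A sketch of the same publish with the plain Java client, with the JSON payload abbreviated from the record above; apex-pdp actually routes this through its InlineKafkaTopicSink wrapper rather than calling the producer directly:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpStatusPublishSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // Matches "Instantiated an idempotent producer" in the apex-pdp log.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Abbreviated from the PDP_STATUS record logged above.
            String heartbeat = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\","
                    + "\"healthy\":\"HEALTHY\",\"messageName\":\"PDP_STATUS\"}";
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", heartbeat));
            }
        }
    }
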
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.140+00:00|INFO|ServiceManager|main] service manager starting Rest Server
13:57:46 policy-pap | ssl.key.password = null
13:57:46 kafka | [2024-01-22 13:55:49,077] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.141+00:00|INFO|ServiceManager|main] service manager starting
13:57:46 policy-pap | ssl.keymanager.algorithm = SunX509
13:57:46 kafka | [2024-01-22 13:55:49,087] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.141+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
13:57:46 policy-pap | ssl.keystore.certificate.chain = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.309109785Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,088] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
13:57:46 policy-pap | ssl.keystore.key = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.317985115Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.87586ms
13:57:46 policy-db-migrator |
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.141+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-pap | ssl.keystore.location = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.321501974Z level=info msg="Executing migration" id=create_ngalert_configuration_table
13:57:46 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.159+00:00|INFO|ServiceManager|main] service manager started
13:57:46 kafka | [2024-01-22 13:55:49,097] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
13:57:46 policy-pap | ssl.keystore.password = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.322238954Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=736.78µs
13:57:46 policy-db-migrator | --------------
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.160+00:00|INFO|ServiceManager|main] service manager started
13:57:46 kafka | [2024-01-22 13:55:49,207] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-pap | ssl.keystore.type = JKS
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.325165507Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.160+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
13:57:46 kafka | [2024-01-22 13:55:49,220] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
13:57:46 policy-pap | ssl.protocol = TLSv1.3
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.326144964Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=979.317µs
13:57:46 kafka | [2024-01-22 13:55:49,232] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.162+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
13:57:46 policy-pap | ssl.provider = null
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.329040586Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
13:57:46 kafka | [2024-01-22 13:55:49,235] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
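The JettyJerseyServer dumps above describe the apex-pdp REST endpoint: embedded Jetty 11 on 0.0.0.0:6969 with the Prometheus MetricsServlet mounted at /metrics (the successful scrape from Prometheus appears later in this log). A minimal sketch of such a server, assuming Jetty 11 and the simpleclient_servlet_jakarta exporter on the classpath, and omitting the basic-auth setup (user=policyadmin) that the dump shows:

    import io.prometheus.client.servlet.jakarta.exporter.MetricsServlet;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class MetricsServerSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6969); // port taken from the log above
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            // Same servlet class the log names, mounted on the same path.
            context.addServlet(new ServletHolder(new MetricsServlet()), "/metrics");
            server.setHandler(context);
            server.start(); // corresponds to the STARTING state in the dump
            server.join();
        }
    }
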
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.341+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
13:57:46 policy-pap | ssl.secure.random.implementation = null
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.336827235Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.785649ms
13:57:46 kafka | [2024-01-22 13:55:49,237] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(RF6sJHOeSLeKzNa2An6Amw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.341+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
13:57:46 policy-pap | ssl.trustmanager.algorithm = PKIX
13:57:46 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.340318613Z level=info msg="Executing migration" id="create provenance_type table"
13:57:46 kafka | [2024-01-22 13:55:49,262] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.343+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
13:57:46 policy-pap | ssl.truststore.certificates = null
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.341071844Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=752.421µs
13:57:46 kafka | [2024-01-22 13:55:49,270] INFO [Broker id=1] Finished LeaderAndIsr request in 209ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.565+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
13:57:46 policy-pap | ssl.truststore.location = null
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.344904642Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.570+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] (Re-)joining group
13:57:46 policy-pap | ssl.truststore.password = null
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,275] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=RF6sJHOeSLeKzNa2An6Amw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.34591331Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.008448ms
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.582+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Request joining group due to: need to re-join with the given member-id: consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3
13:57:46 policy-pap | ssl.truststore.type = JKS
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.350340825Z level=info msg="Executing migration" id="create alert_image table"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
13:57:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.351054455Z level=info msg="Migration successfully executed" id="create alert_image table" duration=713.62µs
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] (Re-)joining group
13:57:46 policy-pap |
13:57:46 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
13:57:46 kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.928+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
13:57:46 policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.353484843Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:50.928+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
13:57:46 policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.35442469Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=940.147µs
13:57:46 kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
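The join sequence above is the expected two-step dance, not a failure: the consumer's first JoinGroup is rejected with MemberIdRequiredException, the broker hands back a member id (consumer-e65163a7-...-bd794398-...), and the client immediately rejoins with it, which is why "(Re-)joining group" appears twice. From the application's side all of this is hidden inside poll(); a sketch, assuming a consumer configured as in the earlier sketch:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class GroupJoinSketch {
        static void joinGroup(KafkaConsumer<String, String> consumer) {
            consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // nothing to clean up in this sketch
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Fires once the (re-)join succeeds; compare
                    // "Adding newly assigned partitions: policy-pdp-pap-0" below.
                    System.out.println("assigned: " + partitions);
                }
            });
            // The first poll drives the join: the initial request is rejected
            // (MemberIdRequiredException), then the client rejoins with the
            // broker-assigned member id.
            consumer.poll(Duration.ofSeconds(1));
        }
    }
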
clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Successfully joined group with generation Generation{generationId=1, memberId='consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3', protocol='range'} 13:57:46 policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748309 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.357466415Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 13:57:46 kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-apex-pdp | [2024-01-22T13:55:53.602+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Finished assignment for group at generation 1: {consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3=Assignment(partitions=[policy-pdp-pap-0])} 13:57:46 policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Subscribed to topic(s): policy-pdp-pap 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.357528697Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=63.092µs 13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-apex-pdp | [2024-01-22T13:55:53.613+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Successfully synced group in generation Generation{generationId=1, memberId='consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3', protocol='range'} 13:57:46 policy-pap | [2024-01-22T13:55:48.310+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.363386352Z level=info msg="Executing migration" id=create_alert_configuration_history_table 13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-apex-pdp | [2024-01-22T13:55:53.613+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 13:57:46 policy-pap | [2024-01-22T13:55:48.310+00:00|INFO|TopicBase|main] 
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.364313828Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=927.296µs
13:57:46 policy-apex-pdp | [2024-01-22T13:55:53.615+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Adding newly assigned partitions: policy-pdp-pap-0
13:57:46 policy-pap | [2024-01-22T13:55:48.310+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=43ed8a24-8339-45ee-bd66-a36eac6c670e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
13:57:46 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.367167568Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:53.654+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Found no committed offset for partition policy-pdp-pap-0
13:57:46 policy-pap | [2024-01-22T13:55:48.311+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.368119735Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=955.127µs
13:57:46 policy-apex-pdp | [2024-01-22T13:55:53.671+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
13:57:46 policy-pap | allow.auto.create.topics = true
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.371748207Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
13:57:46 policy-apex-pdp | [2024-01-22T13:55:56.155+00:00|INFO|RequestLog|qtp830863979-31] 172.17.0.2 - policyadmin [22/Jan/2024:13:55:56 +0000] "GET /metrics HTTP/1.1" 200 10639 "-" "Prometheus/2.49.1"
13:57:46 policy-pap | auto.commit.interval.ms = 5000
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.372324063Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
13:57:46 policy-apex-pdp | [2024-01-22T13:56:09.893+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
13:57:46 policy-pap | auto.include.jmx.reporter = true
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.375078941Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 policy-pap | auto.offset.reset = latest
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.375560795Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=483.153µs
13:57:46 policy-apex-pdp | [2024-01-22T13:56:09.913+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
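Found no committed offset for partition policy-pdp-pap-0 followed by Resetting offset ... to position FetchPosition{offset=1, ...} is the expected behaviour for a fresh group: with no committed offset, auto.offset.reset = latest sends the consumer to the log end (offset 1 here). A hedged sketch of that same decision made explicitly with the Java client (the consumer is assumed configured as in the previous sketch):

    import java.util.List;
    import java.util.Set;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetReset {
        // returns the starting position, mimicking auto.offset.reset=latest
        static long startingPosition(KafkaConsumer<String, String> consumer) {
            TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
            consumer.assign(List.of(tp));
            OffsetAndMetadata committed = consumer.committed(Set.of(tp)).get(tp);
            if (committed == null) {
                consumer.seekToEnd(List.of(tp)); // what the reset policy does implicitly
            } else {
                consumer.seek(tp, committed.offset());
            }
            return consumer.position(tp);
        }
    }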
13:57:46 policy-pap | bootstrap.servers = [kafka:9092]
13:57:46 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.37858466Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 policy-pap | check.crcs = true
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.379398422Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=814.422µs
13:57:46 policy-apex-pdp | [2024-01-22T13:56:09.916+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:57:46 policy-pap | client.dns.lookup = use_all_dns_ips
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.383500878Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.084+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-pap | client.id = consumer-policy-pap-4
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.391941115Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.437307ms
13:57:46 policy-pap | client.rack =
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.396190155Z level=info msg="Executing migration" id="create library_element table v1"
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.093+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
13:57:46 policy-pap | connections.max.idle.ms = 540000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.397152402Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=964.427µs
13:57:46 policy-db-migrator | > upgrade 0630-toscanodetype.sql
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.093+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
13:57:46 policy-pap | default.api.timeout.ms = 60000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.401803763Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 policy-pap | enable.auto.commit = true
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.402904424Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.100941ms
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.095+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
13:57:46 policy-pap | exclude.internal.topics = true
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.408198113Z level=info msg="Executing migration" id="create library_element_connection table v1"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-pap | fetch.max.bytes = 52428800
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.408912873Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=715.29µs
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.117+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-pap | fetch.max.wait.ms = 500
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.412741761Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 policy-pap | fetch.min.bytes = 1
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.413986026Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.244725ms
13:57:46 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.118+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:57:46 policy-pap | group.id = policy-pap
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.420321734Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-pap | group.instance.id = null
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.126+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
13:57:46 policy-pap | heartbeat.interval.ms = 3000
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.127+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.160+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | interceptor.classes = []
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.163+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
13:57:46 policy-db-migrator | 
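The [OUT|KAFKA|policy-pdp-pap] / [IN|KAFKA|policy-pdp-pap] pairs show apex-pdp publishing its PDP_STATUS heartbeat and then reading its own message back from the shared topic (the dispatcher discards the echo, as the next entries show). A sketch of producing such a JSON heartbeat with Gson and the Java client; the field names mirror the logged payload, but this is not the actual ONAP message class:

    import com.google.gson.Gson;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class HeartbeatPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // illustrative payload shaped like the PDP_STATUS heartbeat above
            String json = new Gson().toJson(Map.of(
                    "pdpType", "apex", "state", "PASSIVE", "healthy", "HEALTHY",
                    "messageName", "PDP_STATUS", "timestampMs", System.currentTimeMillis(),
                    "pdpGroup", "defaultGroup"));
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", json));
            }
        }
    }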
13:57:46 policy-pap | internal.leave.group.on.close = true
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.175+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
13:57:46 kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.176+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:57:46 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
13:57:46 policy-pap | isolation.level = read_uncommitted
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.217+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-apex-pdp | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.218+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:57:46 policy-pap | max.partition.fetch.bytes = 1048576
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.227+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
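discarding event of type PDP_STATUS is the MessageTypeDispatcher dropping messages whose messageName has no registered listener on that side: the PDP publishes PDP_STATUS and consumes PDP_UPDATE / PDP_STATE_CHANGE, so its own status echoes on the shared topic are discarded. A simplified sketch of that dispatch-by-type pattern using Gson; the listener registry here is illustrative, not the ONAP implementation:

    import com.google.gson.JsonParser;
    import java.util.Map;
    import java.util.function.Consumer;

    public class TypeDispatcher {
        private final Map<String, Consumer<String>> listeners;

        public TypeDispatcher(Map<String, Consumer<String>> listeners) {
            this.listeners = listeners;
        }

        public void onMessage(String json) {
            // route on the "messageName" field, as the ONAP dispatcher does by type
            String type = JsonParser.parseString(json)
                    .getAsJsonObject().get("messageName").getAsString();
            Consumer<String> listener = listeners.get(type);
            if (listener == null) {
                System.out.println("discarding event of type " + type); // mirrors the log line
            } else {
                listener.accept(json);
            }
        }
    }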
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | max.poll.interval.ms = 300000
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.421778595Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.457191ms
13:57:46 policy-apex-pdp | [2024-01-22T13:56:10.228+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | max.poll.records = 500
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.425686895Z level=info msg="Executing migration" id="increase max description length to 2048"
13:57:46 policy-apex-pdp | [2024-01-22T13:56:56.082+00:00|INFO|RequestLog|qtp830863979-28] 172.17.0.2 - policyadmin [22/Jan/2024:13:56:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.49.1"
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | metadata.max.age.ms = 300000
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.425715355Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=31.64µs
13:57:46 policy-db-migrator | > upgrade 0660-toscaparameter.sql
13:57:46 policy-pap | metric.reporters = []
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.432501256Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | metrics.num.samples = 2
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.43261546Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=120.394µs
13:57:46 policy-pap | metrics.recording.level = INFO
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.437306752Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
13:57:46 policy-pap | metrics.sample.window.ms = 30000
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.437935509Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=629.948µs
13:57:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.463152629Z level=info msg="Executing migration" id="create data_keys table"
13:57:46 policy-pap | receive.buffer.bytes = 65536
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.464692092Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.542973ms
13:57:46 policy-pap | reconnect.backoff.max.ms = 1000
13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0670-toscapolicies.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.472653816Z level=info msg="Executing migration" id="create secrets table"
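The grafana migrator lines follow one pattern throughout: Executing migration id=..., then Migration successfully executed with a duration, with each applied id recorded in a migration log so reruns skip it (hence the earlier Skipping migration: Already executed, but not recorded in migration log warning). Grafana's migrator is written in Go; a language-neutral sketch of the same bookkeeping in Java:

    import java.util.Map;
    import java.util.Set;

    public class MigrationRunner {
        // migrations should be iterated in insertion order (e.g. a LinkedHashMap)
        public static void run(Map<String, Runnable> migrations, Set<String> migrationLog) {
            for (Map.Entry<String, Runnable> m : migrations.entrySet()) {
                if (migrationLog.contains(m.getKey())) {
                    continue; // already recorded: skip on rerun
                }
                long start = System.nanoTime();
                m.getValue().run();           // apply the schema change
                migrationLog.add(m.getKey()); // record the id so reruns skip it
                System.out.printf("id=%s duration=%dµs%n",
                        m.getKey(), (System.nanoTime() - start) / 1_000);
            }
        }
    }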
secrets table" 13:57:46 policy-pap | reconnect.backoff.ms = 50 13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.473252313Z level=info msg="Migration successfully executed" id="create secrets table" duration=599.707µs 13:57:46 policy-pap | request.timeout.ms = 30000 13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.478493Z level=info msg="Executing migration" id="rename data_keys name column to id" 13:57:46 policy-pap | retry.backoff.ms = 100 13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.526527092Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.032331ms 13:57:46 policy-pap | sasl.client.callback.handler.class = null 13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.531026858Z level=info msg="Executing migration" id="add name column into data_keys" 13:57:46 policy-pap | sasl.jaas.config = null 13:57:46 kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.539125466Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.095428ms 13:57:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:57:46 kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 
(state.change.logger) 13:57:46 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.542996835Z level=info msg="Executing migration" id="copy data_keys id column values into name" 13:57:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:57:46 kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.kerberos.service.name = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.543196551Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=197.635µs 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:57:46 kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 13:57:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.54567328Z level=info msg="Executing migration" id="rename data_keys name column to label" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 13:57:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.595851412Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=50.172242ms 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 13:57:46 policy-pap | sasl.login.callback.handler.class = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.803276087Z level=info msg="Executing migration" id="rename data_keys id column back to name" 13:57:46 policy-db-migrator | 13:57:46 
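The long run of become-leader LeaderAndIsr TRACE entries is the controller bringing every partition of the internal __consumer_offsets topic (50 by default) online on the single broker (id=1), one line per partition. A sketch that confirms the resulting layout with the Java AdminClient, assuming kafka-clients 3.x:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class OffsetsTopicCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription d = admin.describeTopics(List.of("__consumer_offsets"))
                        .allTopicNames().get().get("__consumer_offsets");
                System.out.println("partitions: " + d.partitions().size()); // 50 by default
                // each partition should report broker 1 as leader, matching the TRACE lines
                d.partitions().forEach(p -> System.out.println(
                        "partition " + p.partition() + " leader " + p.leader().id()));
            }
        }
    }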
13:57:46 policy-pap | sasl.login.class = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.859097928Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=55.82038ms
13:57:46 policy-db-migrator | > upgrade 0690-toscapolicy.sql
13:57:46 kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
13:57:46 policy-pap | sasl.login.connect.timeout.ms = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.863120631Z level=info msg="Executing migration" id="create kv_store table v1"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
13:57:46 policy-pap | sasl.login.read.timeout.ms = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.863907283Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=790.222µs
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
13:57:46 policy-pap | sasl.login.refresh.buffer.seconds = 300
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.867557266Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
13:57:46 policy-pap | sasl.login.refresh.min.period.seconds = 60
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.868505722Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=947.786µs
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
13:57:46 policy-pap | sasl.login.refresh.window.factor = 0.8
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.87303759Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
13:57:46 policy-pap | sasl.login.refresh.window.jitter = 0.05
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.873369159Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=330.929µs
13:57:46 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
13:57:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.877442214Z level=info msg="Executing migration" id="create permission table"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
13:57:46 policy-pap | sasl.login.retry.backoff.ms = 100
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.878355689Z level=info msg="Migration successfully executed" id="create permission table" duration=912.825µs
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
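The policy-pap consumer dump lists every sasl.* key at its default (null jaas config, GSSAPI mechanism, kinit at /usr/bin/kinit) because this CSIT run uses plaintext transport, as the security.protocol = PLAINTEXT line below confirms. For reference, a hedged sketch of the properties an equivalent SASL_PLAINTEXT setup would override; the values are placeholders, not configuration from this job:

    import java.util.Properties;

    public class SaslProps {
        static Properties saslOverrides() {
            Properties props = new Properties();
            props.put("security.protocol", "SASL_PLAINTEXT"); // this run uses PLAINTEXT instead
            props.put("sasl.mechanism", "PLAIN");             // logged default is GSSAPI
            props.put("sasl.jaas.config",                     // logged default is null
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"placeholder\" password=\"placeholder\";");
            return props;
        }
    }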
13:57:46 policy-pap | sasl.mechanism = GSSAPI
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.883609087Z level=info msg="Executing migration" id="add unique index permission.role_id"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.884935165Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.325767ms
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.expected.audience = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.887641301Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.expected.issuer = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.888881916Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.240365ms
13:57:46 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.892724314Z level=info msg="Executing migration" id="create role table"
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.893571428Z level=info msg="Migration successfully executed" id="create role table" duration=846.313µs
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.898659651Z level=info msg="Executing migration" id="add column display_name"
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.907201181Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.54135ms
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.910893935Z level=info msg="Executing migration" id="add column group_name"
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
13:57:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.916488312Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.595407ms
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | security.protocol = PLAINTEXT
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.922981945Z level=info msg="Executing migration" id="add index role.org_id"
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
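The migrator's > upgrade NNNN-*.sql banners advance in strict numeric order (0600, 0610, ..., 0740), which suggests the script set is sequenced by its zero-padded filename prefix. A sketch of replaying such a directory in order; the directory name is a placeholder:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;

    public class UpgradeOrder {
        public static void main(String[] args) throws IOException {
            try (Stream<Path> scripts = Files.list(Path.of("sql/upgrade"))) { // placeholder path
                scripts.filter(p -> p.getFileName().toString().endsWith(".sql"))
                       .sorted() // zero-padded prefixes sort correctly: 0600-... before 0610-...
                       .forEach(p -> System.out.println("> upgrade " + p.getFileName()));
            }
        }
    }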
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
13:57:46 policy-pap | security.providers = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.923998834Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.016798ms
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | send.buffer.bytes = 131072
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.929970272Z level=info msg="Executing migration" id="add unique index role_org_id_name"
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | session.timeout.ms = 45000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.931942387Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.971316ms
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.935995761Z level=info msg="Executing migration" id="add index role_org_id_uid"
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0730-toscaproperty.sql
13:57:46 policy-pap | socket.connection.setup.timeout.ms = 10000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.937171824Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.175173ms
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | ssl.cipher.suites = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.942680669Z level=info msg="Executing migration" id="create team role table"
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
13:57:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.943552624Z level=info msg="Migration successfully executed" id="create team role table" duration=871.715µs
13:57:46 kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | ssl.endpoint.identification.algorithm = https
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.947457813Z level=info msg="Executing migration" id="add index team_role.org_id"
13:57:46 kafka | [2024-01-22 13:55:49,288] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | ssl.engine.factory.class = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.948553384Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.094951ms
13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 13:57:46 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 13:57:46 policy-pap | ssl.key.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.951934449Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.keymanager.algorithm = SunX509 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.953225536Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.292317ms 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 13:57:46 policy-pap | ssl.keystore.certificate.chain = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.956830867Z level=info msg="Executing migration" id="add index team_role.team_id" 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.keystore.key = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.958607067Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.77539ms 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.keystore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.96260951Z level=info msg="Executing migration" id="create user role table" 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.keystore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.963419603Z level=info msg="Migration successfully executed" id="create user role table" duration=808.152µs 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 13:57:46 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 13:57:46 policy-pap | ssl.keystore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.967381684Z level=info msg="Executing migration" id="add index user_role.org_id" 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.protocol = TLSv1.3 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.968617509Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.232855ms 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 13:57:46 policy-pap | ssl.provider = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.973258449Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 13:57:46 policy-pap | ssl.secure.random.implementation = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.974630778Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.370809ms 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 13:57:46 policy-pap | ssl.trustmanager.algorithm = PKIX 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.977278752Z level=info msg="Executing migration" id="add index user_role.user_id" 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 13:57:46 policy-pap | ssl.truststore.certificates = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.97862678Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.347598ms 13:57:46 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 13:57:46 policy-pap | ssl.truststore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.984702311Z level=info msg="Executing migration" id="create builtin role table" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 13:57:46 policy-pap | ssl.truststore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.985661728Z level=info msg="Migration successfully executed" id="create builtin role table" duration=960.817µs 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 13:57:46 policy-pap | ssl.truststore.type = 
JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.990859604Z level=info msg="Executing migration" id="add index builtin_role.role_id" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 13:57:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.991885653Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.027679ms 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 13:57:46 policy-pap | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.995272249Z level=info msg="Executing migration" id="add index builtin_role.name" 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.996306688Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.035699ms 13:57:46 policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 13:57:46 policy-db-migrator | > upgrade 0770-toscarequirement.sql 13:57:46 kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:08.999706523Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,288] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.007784114Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.076181ms 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE 
VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 13:57:46 kafka | [2024-01-22 13:55:49,289] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748316 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.013132475Z level=info msg="Executing migration" id="add index builtin_role.org_id" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.015888908Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.763783ms 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|ServiceManager|main] Policy PAP starting topics 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.068863313Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.317+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=43ed8a24-8339-45ee-bd66-a36eac6c670e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:57:46 policy-db-migrator | > upgrade 0780-toscarequirements.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.070776784Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.917031ms 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.317+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=79c954dd-4645-472b-b928-ee2d4186f7c1, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper 
[fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.075780306Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.317+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2e284948-a74b-435c-88b6-e1422c57c262, alive=false, publisher=null]]: starting 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.077109791Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.329925ms 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:48.334+00:00|INFO|ProducerConfig|main] ProducerConfig values: 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.081441425Z level=info msg="Executing migration" id="add unique index role.uid" 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | acks = -1 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.082580385Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.13883ms 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | auto.include.jmx.reporter = true 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.088124931Z level=info msg="Executing migration" id="create seed assignment table" 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | batch.size = 16384 13:57:46 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.08922793Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.102739ms 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | bootstrap.servers = [kafka:9092] 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.094335344Z 
level=info msg="Executing migration" id="add unique index builtin_role_role_name" 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | buffer.memory = 33554432 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.095501735Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.163911ms 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | client.dns.lookup = use_all_dns_ips 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.103221109Z level=info msg="Executing migration" id="add column hidden to role table" 13:57:46 kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | client.id = producer-1 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.109184966Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.966178ms 13:57:46 kafka | [2024-01-22 13:55:49,291] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | compression.type = none 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.112079602Z level=info msg="Executing migration" id="permission kind migration" 13:57:46 kafka | [2024-01-22 13:55:49,291] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | connections.max.idle.ms = 540000 13:57:46 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.121070749Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.989217ms 13:57:46 kafka | [2024-01-22 13:55:49,291] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | delivery.timeout.ms = 120000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.223675972Z level=info msg="Executing migration" id="permission attribute migration" 13:57:46 policy-pap | enable.idempotence = true 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.239710374Z level=info msg="Migration successfully executed" 
id="permission attribute migration" duration=16.039292ms 13:57:46 policy-pap | interceptor.classes = [] 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.243140785Z level=info msg="Executing migration" id="permission identifier migration" 13:57:46 kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.248802144Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.660959ms 13:57:46 policy-pap | linger.ms = 0 13:57:46 kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.255185732Z level=info msg="Executing migration" id="add permission identifier index" 13:57:46 policy-pap | max.block.ms = 60000 13:57:46 kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.256384584Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.198862ms 13:57:46 policy-pap | max.in.flight.requests.per.connection = 5 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.261448847Z level=info msg="Executing migration" id="create query_history table v1" 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.263586183Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=2.136716ms 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to 
OnlineReplica (state.change.logger) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:57:46 policy-pap | max.request.size = 1048576 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.26952189Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | metadata.max.age.ms = 300000 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.270894286Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.371996ms 13:57:46 policy-db-migrator | 13:57:46 policy-pap | metadata.max.idle.ms = 300000 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.275387704Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | metric.reporters = [] 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.275548718Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=160.644µs 13:57:46 policy-db-migrator | > upgrade 0820-toscatrigger.sql 13:57:46 policy-pap | metrics.num.samples = 2 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.278930968Z level=info msg="Executing migration" id="rbac disabled migrator" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | metrics.recording.level = INFO 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.27902339Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=91.613µs 13:57:46 policy-pap | metrics.sample.window.ms = 30000 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName 
VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.282592934Z level=info msg="Executing migration" id="teams permissions migration" 13:57:46 policy-pap | partitioner.adaptive.partitioning.enable = true 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.283100197Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=507.083µs 13:57:46 policy-pap | partitioner.availability.timeout.ms = 0 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.28547304Z level=info msg="Executing migration" id="dashboard permissions" 13:57:46 policy-pap | partitioner.class = null 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.288862639Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=3.390319ms 13:57:46 policy-pap | partitioner.ignore.keys = false 13:57:46 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.299020697Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 13:57:46 policy-pap | receive.buffer.bytes = 32768 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.299749636Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=730.499µs 13:57:46 policy-pap | reconnect.backoff.max.ms = 1000 13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 13:57:46 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.303226898Z level=info msg="Executing migration" id="drop managed folder create actions" 13:57:46 policy-pap | reconnect.backoff.ms = 50 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.303527556Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=300.247µs 13:57:46 policy-pap | request.timeout.ms = 30000 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.307925391Z level=info msg="Executing migration" id="alerting notification permissions" 13:57:46 policy-pap | retries = 2147483647 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-pap | retry.backoff.ms = 100 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.308423325Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=500.963µs 13:57:46 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 13:57:46 policy-pap | sasl.client.callback.handler.class = null 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.312667836Z level=info msg="Executing migration" id="create query_history_star table v1" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.jaas.config = null 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.31393526Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.271314ms 13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 13:57:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.320860822Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.322157916Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.297044ms 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.kerberos.service.name = null 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 
for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.325408682Z level=info msg="Executing migration" id="add column org_id in query_history_star" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.333884795Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.541765ms 13:57:46 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 13:57:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.33937189Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.login.callback.handler.class = null 13:57:46 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.339468722Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=95.952µs 13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 13:57:46 policy-pap | sasl.login.class = null 13:57:46 kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.343200191Z level=info msg="Executing migration" id="create correlation table v1" 13:57:46 policy-pap | sasl.login.connect.timeout.ms = null 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.344122315Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=920.344µs 13:57:46 policy-pap | sasl.login.read.timeout.ms = null 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.348025828Z level=info msg="Executing migration" id="add index correlations.uid" 13:57:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:57:46 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 13:57:46 kafka | [2024-01-22 13:55:49,296] INFO 
[Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.349498797Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.473189ms 13:57:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,299] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.35494215Z level=info msg="Executing migration" id="add index correlations.source_uid" 13:57:46 policy-pap | sasl.login.refresh.window.factor = 0.8 13:57:46 kafka | [2024-01-22 13:55:49,299] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 13:57:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.357350633Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.406563ms 13:57:46 kafka | [2024-01-22 13:55:49,299] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:57:46 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.361441601Z level=info msg="Executing migration" id="add correlation config column" 13:57:46 policy-pap | sasl.login.retry.backoff.ms = 100 13:57:46 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.370152461Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.71142ms 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.mechanism = GSSAPI 13:57:46 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.373669613Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 13:57:46 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 13:57:46 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.374445224Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=775.771µs 13:57:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:57:46 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.380032361Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.oauthbearer.expected.audience = null 13:57:46 kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.381251023Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.218502ms 13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 13:57:46 policy-pap | sasl.oauthbearer.expected.issuer = null 13:57:46 kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.384442997Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 
3600000
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.41908962Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=34.626292ms
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
13:57:46 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.42631441Z level=info msg="Executing migration" id="create correlation v2"
13:57:46 policy-db-migrator |
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
13:57:46 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.428918839Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.435384ms
13:57:46 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
13:57:46 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.433032967Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope
13:57:46 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.434879046Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.845149ms
13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
13:57:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub
13:57:46 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.442980389Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
13:57:46 policy-db-migrator |
13:57:46 policy-pap | security.protocol = PLAINTEXT
13:57:46 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.445060644Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.083375ms
13:57:46 policy-pap | security.providers = null
13:57:46 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.452315295Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
13:57:46 policy-pap | send.buffer.bytes = 131072
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
13:57:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.453498566Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.185011ms
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-pap | socket.connection.setup.timeout.ms = 10000
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.457592144Z level=info msg="Executing migration" id="copy correlation v1 to v2"
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
13:57:46 policy-pap | ssl.cipher.suites = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.457935393Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=344.209µs
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.46007369Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-pap | ssl.endpoint.identification.algorithm = https
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.461262161Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.189141ms
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-pap | ssl.engine.factory.class = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.466892649Z level=info msg="Executing migration" id="add provisioning column"
13:57:46 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-pap | ssl.key.password = null
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.474705645Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.811466ms
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
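The policy-db-migrator output above applies one plain-DDL script at a time (0880/0890/0900 create foreign-key lookup indexes on toscaservicetemplate). A minimal sketch of executing one such statement over JDBC, assuming a MariaDB endpoint mariadb:3306, a policyadmin schema and policy_user credentials; these connection values are illustrative assumptions, not taken from this log:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrationSketch {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint/credentials for illustration only.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            // Statement text copied verbatim from the migrator output above.
            stmt.execute("CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName "
                       + "ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)");
        }
    }
}

The real migrator also prints the `--------------` delimiters and upgrade bookkeeping seen in the log; the sketch covers only the statement execution itself.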
13:57:46 policy-pap | ssl.keymanager.algorithm = SunX509
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.478522606Z level=info msg="Executing migration" id="create entity_events table"
13:57:46 policy-pap | ssl.keystore.certificate.chain = null
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.479359188Z level=info msg="Migration successfully executed" id="create entity_events table" duration=835.962µs
13:57:46 policy-pap | ssl.keystore.key = null
13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.485442578Z level=info msg="Executing migration" id="create dashboard public config v1"
13:57:46 policy-pap | ssl.keystore.location = null
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.486433244Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=991.006µs
13:57:46 policy-pap | ssl.keystore.password = null
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.489259729Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.489718311Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
13:57:46 policy-pap | ssl.keystore.type = JKS
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.493719376Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:57:46 policy-pap | ssl.protocol = TLSv1.3
13:57:46 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.494405074Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:57:46 policy-pap | ssl.provider = null
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.497520446Z level=info msg="Executing migration" id="Drop old dashboard public config table"
13:57:46 policy-pap | ssl.secure.random.implementation = null
13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.498393119Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=872.163µs
13:57:46 policy-pap | ssl.trustmanager.algorithm = PKIX
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.502146868Z level=info msg="Executing migration" id="recreate dashboard public config v1"
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.503118104Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=971.266µs
13:57:46 policy-pap | ssl.truststore.certificates = null
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.50562119Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
13:57:46 policy-pap | ssl.truststore.location = null
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.506708998Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.089909ms
13:57:46 policy-pap | ssl.truststore.password = null
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.511944026Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
13:57:46 policy-pap | ssl.truststore.type = JKS
13:57:46 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
13:57:46 kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.513197219Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.252863ms
13:57:46 policy-pap | transaction.timeout.ms = 60000
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-pap | transactional.id = null
13:57:46 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.520463451Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.521865048Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.404287ms
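The kafka TRACE entries interleaved above are broker 1 accepting leadership of all 50 __consumer_offsets partitions the controller assigned (replicas=[1], isNew=true, single-node CSIT setup). For a comparable topic created by hand, a sketch using the Kafka Admin API with the same shape the broker later reports for these partitions (50 partitions, replication factor 1, cleanup.policy=compact, segment.bytes=104857600); the topic name offsets-demo is made up, since __consumer_offsets itself is created internally by the broker and should not be created by clients:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Bootstrap address as reported by the client configs in this log.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // Hypothetical topic mirroring the logged __consumer_offsets settings.
            NewTopic t = new NewTopic("offsets-demo", 50, (short) 1)
                    .configs(Map.of("cleanup.policy", "compact",
                                    "segment.bytes", "104857600"));
            admin.createTopics(Collections.singleton(t)).all().get();
        }
    }
}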
13:57:46 policy-pap |
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.525313858Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
13:57:46 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.526093589Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=779.801µs
13:57:46 policy-pap | [2024-01-22T13:55:48.347+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.530857944Z level=info msg="Executing migration" id="Drop public config table"
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
13:57:46 policy-pap | [2024-01-22T13:55:48.364+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.531425769Z level=info msg="Migration successfully executed" id="Drop public config table" duration=567.955µs
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.534749607Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-pap | [2024-01-22T13:55:48.364+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.535854246Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.104099ms
13:57:46 policy-pap | [2024-01-22T13:55:48.364+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748364
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.541173446Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
13:57:46 policy-pap | [2024-01-22T13:55:48.365+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2e284948-a74b-435c-88b6-e1422c57c262, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
13:57:46 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.542295886Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.12239ms
13:57:46 policy-pap | [2024-01-22T13:55:48.365+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9af27cde-aa4f-4a25-a18a-53e421ca9375, alive=false, publisher=null]]: starting
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.698489961Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
13:57:46 policy-pap | [2024-01-22T13:55:48.365+00:00|INFO|ProducerConfig|main] ProducerConfig values:
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.700107993Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.621143ms
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.707119168Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-pap | acks = -1
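The ProducerConfig dump that begins above (acks = -1, with further values interleaved below) is the Kafka client echoing its effective settings before it logs startTimeMs; producer-1 was already reported as idempotent a few entries earlier. A minimal sketch of a producer configured with the values visible in this log (kafka:9092 bootstrap, idempotence, string serializers); the topic name policy-pdp-pap is an assumption for illustration, since the log does not name a topic at this point:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // "Instantiated an idempotent producer"
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Assumed topic name; replace with the sink topic configured for pap.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "value"));
        }
    }
}

With acks=all and idempotence enabled, the defaults echoed in the log (retries = 2147483647, max.in.flight.requests.per.connection = 5) give at-least-once delivery without reordering duplicates.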
13:57:46 policy-db-migrator |
13:57:46 kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.708196566Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.078138ms
13:57:46 policy-pap | auto.include.jmx.reporter = true
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.71288599Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
13:57:46 policy-pap | batch.size = 16384
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.752212446Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=39.321346ms
13:57:46 kafka | [2024-01-22 13:55:49,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
13:57:46 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:57:46 policy-pap | bootstrap.servers = [kafka:9092]
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.758296686Z level=info msg="Executing migration" id="add annotations_enabled column"
13:57:46 kafka | [2024-01-22 13:55:49,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | buffer.memory = 33554432
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.764991163Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.704217ms
13:57:46 kafka | [2024-01-22 13:55:49,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-pap | client.dns.lookup = use_all_dns_ips
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.768591187Z level=info msg="Executing migration" id="add time_selection_enabled column"
13:57:46 policy-pap | client.id = producer-2
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.774690348Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.098611ms
13:57:46 policy-pap | compression.type = none
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | connections.max.idle.ms = 540000
13:57:46 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.779495905Z level=info msg="Executing migration" id="delete orphaned public dashboards"
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.779830844Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=335.448µs
13:57:46 policy-pap | delivery.timeout.ms = 120000
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.783418498Z level=info msg="Executing migration" id="add share column"
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.793055652Z level=info msg="Migration successfully executed" id="add share column" duration=9.636074ms
13:57:46 policy-pap | enable.idempotence = true
13:57:46 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.796143743Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.796375799Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=230.666µs
13:57:46 policy-pap | interceptor.classes = []
13:57:46 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.800031496Z level=info msg="Executing migration" id="create file table"
13:57:46 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.800767745Z level=info msg="Migration successfully executed" id="create file table" duration=736.019µs
13:57:46 policy-pap | linger.ms = 0
13:57:46 policy-db-migrator |
13:57:46 policy-pap | max.block.ms = 60000
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.80360676Z level=info msg="Executing migration" id="file table idx: path natural pk"
13:57:46 policy-pap | max.in.flight.requests.per.connection = 5
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
13:57:46 policy-db-migrator |
13:57:46 policy-pap | max.request.size = 1048576
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.805232943Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.625253ms
13:57:46 policy-pap | metadata.max.age.ms = 300000
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
13:57:46 policy-pap | metadata.max.idle.ms = 300000
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.809452704Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.810574803Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.122819ms
13:57:46 policy-pap | metric.reporters = []
13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.815069372Z level=info msg="Executing migration" id="create file_meta table"
13:57:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
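Once the become-leader transitions logged above complete, every __consumer_offsets partition should report broker 1 as leader with ISR [1], matching the LeaderAndIsr state the controller sent. A read-only sketch that verifies this from outside with the Admin API (describeTopics is safe to run against the internal topic):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class LeadershipCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription d = admin
                    .describeTopics(Collections.singleton("__consumer_offsets"))
                    .allTopicNames().get().get("__consumer_offsets");
            // Expect: 50 partitions, each with leader node 1 and ISR [1].
            d.partitions().forEach(p ->
                    System.out.printf("partition %d leader %s isr %s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}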
13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | metrics.num.samples = 2 13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.815824032Z level=info msg="Migration successfully executed" id="create file_meta table" duration=754.86µs 13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | metrics.recording.level = INFO 13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 13:57:46 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.81992096Z level=info msg="Executing migration" id="file table idx: path key" 13:57:46 policy-pap | metrics.sample.window.ms = 30000 13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.821175893Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.252783ms 13:57:46 policy-pap | partitioner.adaptive.partitioning.enable = true 13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.826605316Z level=info msg="Executing migration" id="set path collation in file table" 13:57:46 policy-pap | partitioner.availability.timeout.ms = 0 13:57:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:57:46 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.826690718Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=89.482µs 13:57:46 policy-pap | partitioner.class = null 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 13:57:46 policy-pap | partitioner.ignore.keys = false 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.830763005Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 13:57:46 policy-pap | receive.buffer.bytes = 32768 13:57:46 
policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 13:57:46 policy-pap | reconnect.backoff.max.ms = 1000 13:57:46 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.830839657Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=75.452µs 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 13:57:46 policy-pap | reconnect.backoff.ms = 50 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.835299365Z level=info msg="Executing migration" id="managed permissions migration" 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 13:57:46 policy-pap | request.timeout.ms = 30000 13:57:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.835850659Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=551.114µs 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 13:57:46 policy-pap | retries = 2147483647 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.840482511Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 13:57:46 policy-pap | retry.backoff.ms = 100 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.840685797Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=202.516µs 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 13:57:46 policy-pap | sasl.client.callback.handler.class = null 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.845443362Z level=info msg="Executing migration" id="RBAC action name migrator" 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 13:57:46 policy-pap | sasl.jaas.config = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.846439748Z level=info msg="Migration 
successfully executed" id="RBAC action name migrator" duration=998.686µs 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 13:57:46 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.850834634Z level=info msg="Executing migration" id="Add UID column to playlist" 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 13:57:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.860790856Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.954882ms 13:57:46 kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.864446853Z level=info msg="Executing migration" id="Update uid column values in playlist" 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 13:57:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 13:57:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.864637338Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=191.395µs 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.kerberos.service.name = null 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.867562785Z level=info msg="Executing migration" id="Add index for uid in playlist" 13:57:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.868659064Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" 
duration=1.096399ms 13:57:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.872411483Z level=info msg="Executing migration" id="update group index for alert rules" 13:57:46 policy-pap | sasl.login.callback.handler.class = null 13:57:46 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.872796463Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=385.511µs 13:57:46 policy-pap | sasl.login.class = null 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.876948672Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 13:57:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.877153107Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=200.935µs 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 13:57:46 policy-pap | sasl.login.connect.timeout.ms = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.882117298Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 13:57:46 policy-pap | sasl.login.read.timeout.ms = null 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.88256762Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=452.152µs 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.886593156Z level=info msg="Executing migration" id="add action column to seed_assignment" 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-21 (state.change.logger) 13:57:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.90080247Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=14.205924ms 13:57:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 13:57:46 policy-pap | sasl.login.refresh.window.factor = 0.8 13:57:46 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.904849497Z level=info msg="Executing migration" id="add scope column to seed_assignment" 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 13:57:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 13:57:46 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.916394901Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=11.544874ms 13:57:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.login.retry.backoff.ms = 100 13:57:46 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 policy-db-migrator | 13:57:46 policy-pap | sasl.mechanism = GSSAPI 13:57:46 kafka | [2024-01-22 13:55:49,329] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, 
__consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 13:57:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 13:57:46 kafka | [2024-01-22 13:55:49,329] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 13:57:46 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.920932121Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 13:57:46 policy-pap | sasl.oauthbearer.expected.audience = null 13:57:46 kafka | [2024-01-22 13:55:49,337] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.922151453Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.219012ms 13:57:46 policy-pap | sasl.oauthbearer.expected.issuer = null 13:57:46 kafka | [2024-01-22 13:55:49,338] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 13:57:46 kafka | [2024-01-22 13:55:49,339] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:09.92620647Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,339] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,339] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.051176416Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=124.966476ms 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,350] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.064630829Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 13:57:46 kafka | [2024-01-22 13:55:49,351] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 13:57:46 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.067262448Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.591258ms 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,351] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 13:57:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 13:57:46 kafka | [2024-01-22 13:55:49,351] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.072231799Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 13:57:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 13:57:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 13:57:46 kafka | [2024-01-22 13:55:49,351] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.074238551Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.006962ms 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 13:57:46 kafka | [2024-01-22 13:55:49,359] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.077925198Z level=info msg="Executing migration" id="add primary key to seed_assigment" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | security.protocol = PLAINTEXT 13:57:46 kafka | [2024-01-22 13:55:49,360] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | security.providers = null 13:57:46 kafka | [2024-01-22 13:55:49,360] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 13:57:46 policy-pap | send.buffer.bytes = 131072 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.116800967Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=38.874249ms 13:57:46 policy-db-migrator | > upgrade 0100-pdp.sql 13:57:46 kafka | [2024-01-22 13:55:49,360] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.126902712Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,361] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.127599561Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=695.478µs 13:57:46 kafka | [2024-01-22 13:55:49,370] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-pap | socket.connection.setup.timeout.ms = 10000 13:57:46 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.132827668Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 13:57:46 kafka | [2024-01-22 13:55:49,371] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | ssl.cipher.suites = null 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.13329017Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=461.982µs 13:57:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,371] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.136401911Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 13:57:46 policy-pap | ssl.endpoint.identification.algorithm = https 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,371] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | ssl.engine.factory.class = null 13:57:46 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.13674009Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=337.889µs 13:57:46 kafka | [2024-01-22 13:55:49,371] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 policy-pap | ssl.key.password = null 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.140105439Z level=info msg="Executing migration" id="create folder table" 13:57:46 kafka | [2024-01-22 13:55:49,379] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-pap | ssl.keymanager.algorithm = SunX509 13:57:46 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.141194467Z level=info msg="Migration successfully executed" id="create folder table" duration=1.088508ms 13:57:46 policy-pap | ssl.keystore.certificate.chain = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.147706148Z level=info msg="Executing migration" id="Add index for parent_uid" 13:57:46 kafka | [2024-01-22 13:55:49,380] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.keystore.key = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.149733711Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.024093ms 13:57:46 kafka | [2024-01-22 13:55:49,380] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.keystore.location = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.154912877Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 13:57:46 kafka | [2024-01-22 13:55:49,380] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | ssl.keystore.password = null 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.15654173Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.627823ms 13:57:46 kafka | [2024-01-22 13:55:49,380] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
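[annotation] The recurring kafka block above (LogLoader -> Created log -> no checkpointed high watermark -> leader at epoch 0) is the broker materializing each internal __consumer_offsets partition on first use; Kafka defaults that topic to 50 partitions, and cleanup.policy=compact keeps only the newest committed offset per group/topic/partition key. A minimal sketch, assuming the in-network bootstrap address kafka:9092 seen in these logs, that dumps the topic's effective config with the Java AdminClient:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    // Sketch only: confirm the compacted-cleanup settings logged above
    // (cleanup.policy=compact, segment.bytes=104857600).
    public class DescribeOffsetsTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                ConfigResource topic =
                        new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Config config = admin.describeConfigs(Collections.singleton(topic))
                        .all().get().get(topic);
                config.entries().forEach(e ->
                        System.out.println(e.name() + " = " + e.value()));
            }
        }
    }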
(state.change.logger) 13:57:46 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 13:57:46 policy-pap | ssl.keystore.type = JKS 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.161433238Z level=info msg="Executing migration" id="Update folder title length" 13:57:46 kafka | [2024-01-22 13:55:49,389] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | ssl.protocol = TLSv1.3 13:57:46 kafka | [2024-01-22 13:55:49,390] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.161458119Z level=info msg="Migration successfully executed" id="Update folder title length" duration=24.481µs 13:57:46 policy-pap | ssl.provider = null 13:57:46 kafka | [2024-01-22 13:55:49,390] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.173038392Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 13:57:46 policy-pap | ssl.secure.random.implementation = null 13:57:46 kafka | [2024-01-22 13:55:49,390] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.175152658Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.113926ms 13:57:46 policy-pap | ssl.trustmanager.algorithm = PKIX 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.183021104Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 13:57:46 kafka | [2024-01-22 13:55:49,390] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.184413621Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.391977ms 13:57:46 policy-pap | ssl.truststore.certificates = null 13:57:46 kafka | [2024-01-22 13:55:49,401] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.191735363Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 13:57:46 policy-pap | ssl.truststore.location = null 13:57:46 kafka | [2024-01-22 13:55:49,403] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.192941164Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.205651ms 13:57:46 policy-pap | ssl.truststore.password = null 13:57:46 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.201059957Z level=info msg="Executing migration" id="create anon_device table" 13:57:46 policy-pap | ssl.truststore.type = JKS 13:57:46 kafka | [2024-01-22 13:55:49,403] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.20193567Z level=info msg="Migration successfully executed" id="create anon_device table" duration=875.803µs 13:57:46 policy-pap | transaction.timeout.ms = 60000 13:57:46 kafka | [2024-01-22 13:55:49,403] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.211922702Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 13:57:46 policy-pap | transactional.id = null 13:57:46 kafka | [2024-01-22 13:55:49,403] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 policy-db-migrator | 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.213485073Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.560951ms 13:57:46 kafka | [2024-01-22 13:55:49,409] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.223827424Z level=info msg="Executing migration" id="add index anon_device.updated_at" 13:57:46 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 13:57:46 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.225055786Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.228312ms 13:57:46 policy-pap | 13:57:46 kafka | [2024-01-22 13:55:49,409] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.232846611Z level=info msg="Executing migration" id="create signing_key table" 13:57:46 kafka | [2024-01-22 13:55:49,410] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.234568946Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.719175ms 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:48.366+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 13:57:46 kafka | [2024-01-22 13:55:49,410] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.245983005Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 13:57:46 kafka | [2024-01-22 13:55:49,410] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.247137496Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.155401ms 13:57:46 kafka | [2024-01-22 13:55:49,416] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 13:57:46 policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748369 13:57:46 kafka | [2024-01-22 13:55:49,416] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | -------------- 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.257020395Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 13:57:46 policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9af27cde-aa4f-4a25-a18a-53e421ca9375, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,416] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.258413261Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.393076ms 13:57:46 policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,416] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.265649401Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 13:57:46 policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 13:57:46 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 13:57:46 kafka | [2024-01-22 13:55:49,417] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
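[annotation] Steps 0120-0140 above are a drop-backfill-re-key rebuild of pdpstatistics: 0120 drops the old primary key, 0130 adds the ID column (alongside the new POLICYUNDEPLOY* counters), and 0140 numbers the existing rows with ROW_NUMBER() over timeStamp before declaring the composite PRIMARY KEY (ID, name, version). A minimal JDBC sketch of the same sequence, with the statements quoted from the migrator output; connection details are placeholders and the 0130 counter columns are omitted for brevity:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Sketch only: the real job drives these statements through policy-db-migrator.
    public class RebuildPdpStatisticsPk {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass");
                 Statement st = c.createStatement()) {
                st.executeUpdate("ALTER TABLE pdpstatistics DROP PRIMARY KEY");
                st.executeUpdate("ALTER TABLE pdpstatistics ADD COLUMN ID BIGINT NOT NULL");
                // Backfill: number existing rows by timestamp so the new key is unique.
                st.executeUpdate(
                    "UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, "
                  + "ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics "
                  + "GROUP BY name, version, timeStamp) AS t "
                  + "ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) "
                  + "SET p.id=t.row_num");
                st.executeUpdate(
                    "ALTER TABLE pdpstatistics "
                  + "ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)");
            }
        }
    }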
(state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.265952519Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=303.148µs 13:57:46 policy-pap | [2024-01-22T13:55:48.372+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,423] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.268801114Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 13:57:46 policy-pap | [2024-01-22T13:55:48.372+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 13:57:46 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 13:57:46 kafka | [2024-01-22 13:55:49,423] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.278612051Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.811417ms 13:57:46 policy-pap | [2024-01-22T13:55:48.375+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,423] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.283967561Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 13:57:46 policy-pap | [2024-01-22T13:55:48.375+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,423] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.284604738Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=638.997µs 13:57:46 policy-pap | [2024-01-22T13:55:48.375+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,423] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.288582522Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 13:57:46 policy-pap | [2024-01-22T13:55:48.376+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 13:57:46 kafka | [2024-01-22 13:55:49,430] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.289740003Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.157601ms 13:57:46 policy-pap | [2024-01-22T13:55:48.380+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.293973744Z level=info msg="Executing migration" id="create sso_setting table" 13:57:46 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 13:57:46 kafka | [2024-01-22 13:55:49,430] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | [2024-01-22T13:55:48.378+00:00|INFO|TimerManager|Thread-9] timer manager update started 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.294885568Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=911.614µs 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,430] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:48.381+00:00|INFO|ServiceManager|main] Policy PAP started 13:57:46 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 13:57:46 kafka | [2024-01-22 13:55:49,431] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.299794447Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 13:57:46 policy-pap | [2024-01-22T13:55:48.383+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.156 seconds (process running for 11.818) 13:57:46 kafka | [2024-01-22 13:55:49,431] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.300511915Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=718.069µs 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:48.896+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ 13:57:46 kafka | [2024-01-22 13:55:49,438] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.303279058Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | [2024-01-22T13:55:48.896+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ 13:57:46 kafka | [2024-01-22 13:55:49,439] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.303539515Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=260.677µs 13:57:46 policy-db-migrator | 13:57:46 policy-pap | [2024-01-22T13:55:48.902+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 13:57:46 kafka | [2024-01-22 13:55:49,439] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 13:57:46 grafana | logger=migrator t=2024-01-22T13:55:10.306201165Z level=info msg="migrations completed" performed=523 skipped=0 duration=4.454310929s 13:57:46 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 13:57:46 policy-pap | [2024-01-22T13:55:48.902+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ 13:57:46 kafka | [2024-01-22 13:55:49,439] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=sqlstore t=2024-01-22T13:55:10.315144169Z level=info msg="Created default admin" user=admin 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:48.967+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 13:57:46 kafka | [2024-01-22 13:55:49,439] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 13:57:46 grafana | logger=sqlstore t=2024-01-22T13:55:10.315417616Z level=info msg="Created default organization" 13:57:46 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 13:57:46 policy-pap | [2024-01-22T13:55:48.968+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 13:57:46 kafka | [2024-01-22 13:55:49,445] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=secrets t=2024-01-22T13:55:10.320687234Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 13:57:46 policy-db-migrator | JOIN pdpstatistics b 13:57:46 policy-pap | [2024-01-22T13:55:48.987+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:57:46 kafka | [2024-01-22 13:55:49,446] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 grafana | logger=plugin.store t=2024-01-22T13:55:10.337409003Z level=info msg="Loading plugins..." 13:57:46 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 13:57:46 policy-pap | [2024-01-22T13:55:48.987+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ 13:57:46 kafka | [2024-01-22 13:55:49,446] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 13:57:46 grafana | logger=local.finder t=2024-01-22T13:55:10.373310644Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 13:57:46 policy-db-migrator | SET a.id = b.id 13:57:46 policy-pap | [2024-01-22T13:55:49.029+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:57:46 kafka | [2024-01-22 13:55:49,446] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=plugin.store t=2024-01-22T13:55:10.373364376Z level=info msg="Plugins loaded" count=55 duration=35.955923ms 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:49.134+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 13:57:46 kafka | [2024-01-22 13:55:49,446] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
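[annotation] policy-pap's two sinks start as idempotent producers: each is assigned a broker-side producer id with epoch 0 ("ProducerId set to 0/1 with epoch 0"), which lets the broker deduplicate retried batches, while transactional.id stays null, so idempotence is on but transactions are not used. A minimal sketch of a producer configured the same way; the bootstrap address and topic come from these logs, the payload is illustrative:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    // Sketch only: idempotent, String-serialized, no transactional.id. The broker
    // assigns the ProducerId/epoch seen in the "ProducerId set to N with epoch 0" lines.
    public class PdpPapSinkSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap",
                        "{\"messageName\":\"PDP_UPDATE\"}")); // illustrative payload
            }
        }
    }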
(state.change.logger) 13:57:46 grafana | logger=query_data t=2024-01-22T13:55:10.376822017Z level=info msg="Query Service initialization" 13:57:46 policy-db-migrator | 13:57:46 policy-pap | [2024-01-22T13:55:49.153+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:57:46 kafka | [2024-01-22 13:55:49,639] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 grafana | logger=live.push_http t=2024-01-22T13:55:10.390796243Z level=info msg="Live Push Gateway initialization" 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,640] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | [2024-01-22T13:55:49.261+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:57:46 grafana | logger=ngalert.migration t=2024-01-22T13:55:10.396152844Z level=info msg=Starting 13:57:46 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 13:57:46 kafka | [2024-01-22 13:55:49,640] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:49.285+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 13:57:46 grafana | logger=ngalert.migration orgID=1 t=2024-01-22T13:55:10.396823761Z level=info msg="Migrating alerts for organisation" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,640] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:50.482+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 13:57:46 grafana | logger=ngalert.migration orgID=1 t=2024-01-22T13:55:10.397111209Z level=info msg="Alerts found to migrate" alerts=0 13:57:46 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 13:57:46 kafka | [2024-01-22 13:55:49,640] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
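[annotation] The UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings above are the usual auto-creation race, not failures: the first metadata fetch for policy-pdp-pap finds no topic, the broker then creates it, and subsequent fetches report no leader until the election completes, after which the clients proceed. A minimal sketch that sidesteps the race by creating the topic up front; 1 partition / replication factor 1 are assumptions matching a single-broker setup, not values read from this job:

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    // Sketch only: pre-create policy-pdp-pap instead of relying on broker auto-creation.
    public class PreCreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                try {
                    admin.createTopics(Collections.singleton(
                            new NewTopic("policy-pdp-pap", 1, (short) 1))).all().get();
                } catch (ExecutionException e) {
                    // Already created by another client: benign, ignore.
                    if (!(e.getCause() instanceof TopicExistsException)) throw e;
                }
            }
        }
    }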
(state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:50.488+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 13:57:46 grafana | logger=ngalert.migration orgID=1 t=2024-01-22T13:55:10.397408026Z level=warn msg="No available receivers" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,649] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-pap | [2024-01-22T13:55:50.504+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 13:57:46 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-22T13:55:10.400157809Z level=info msg="Completed legacy migration" 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,650] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | [2024-01-22T13:55:50.509+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] (Re-)joining group 13:57:46 grafana | logger=infra.usagestats.collector t=2024-01-22T13:55:10.495515589Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,650] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:50.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7 13:57:46 grafana | logger=provisioning.datasources t=2024-01-22T13:55:10.498240121Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 13:57:46 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 13:57:46 kafka | [2024-01-22 13:55:49,650] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=provisioning.alerting t=2024-01-22T13:55:10.511323074Z level=info msg="starting to provision alerting" 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:50.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 13:57:46 kafka | [2024-01-22 13:55:49,650] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader epoch was -1. (state.change.logger) 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 13:57:46 policy-pap | [2024-01-22T13:55:50.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 13:57:46 grafana | logger=provisioning.alerting t=2024-01-22T13:55:10.511355205Z level=info msg="finished to provision alerting" 13:57:46 kafka | [2024-01-22 13:55:49,666] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:50.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Request joining group due to: need to re-join with the given member-id: consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e 13:57:46 grafana | logger=ngalert.state.manager t=2024-01-22T13:55:10.511643712Z level=info msg="Warming state cache for startup" 13:57:46 kafka | [2024-01-22 13:55:49,667] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | 13:57:46 policy-pap | [2024-01-22T13:55:50.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
13:57:46 grafana | logger=ngalert.multiorg.alertmanager t=2024-01-22T13:55:10.511707334Z level=info msg="Starting MultiOrg Alertmanager" 13:57:46 kafka | [2024-01-22 13:55:49,668] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:50.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] (Re-)joining group 13:57:46 grafana | logger=ngalert.state.manager t=2024-01-22T13:55:10.512125375Z level=info msg="State cache has been initialized" states=0 duration=480.233µs 13:57:46 policy-db-migrator | 13:57:46 policy-pap | [2024-01-22T13:55:53.545+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7', protocol='range'} 13:57:46 kafka | [2024-01-22 13:55:49,669] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 grafana | logger=ngalert.scheduler t=2024-01-22T13:55:10.512172946Z level=info msg="Starting scheduler" tickInterval=10s 13:57:46 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 13:57:46 policy-pap | [2024-01-22T13:55:53.547+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Successfully joined group with generation Generation{generationId=1, memberId='consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e', protocol='range'} 13:57:46 kafka | [2024-01-22 13:55:49,669] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1.
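[annotation] The join sequence above is the standard group-membership dance: the first JoinGroup is rejected with MemberIdRequiredException so the coordinator can issue a member id, the client immediately re-joins with that id, the group reaches generation 1, and the range assignor gives policy-pdp-pap-0 to each group's single member (the coordinator id 2147483646 in the earlier "Discovered group coordinator" lines is Integer.MAX_VALUE minus the broker id 1). A minimal consumer sketch that drives the same flow; group id and topic are taken from these logs:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    // Sketch only: subscribe() + poll() runs FindCoordinator, JoinGroup (first
    // attempt gets MemberIdRequiredException, as logged), re-join, and SyncGroup.
    public class JoinPdpPapGroup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                System.out.println("assigned: " + consumer.assignment()
                        + ", fetched: " + records.count());
            }
        }
    }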
(state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 policy-pap | [2024-01-22T13:55:53.553+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Finished assignment for group at generation 1: {consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e=Assignment(partitions=[policy-pdp-pap-0])} 13:57:46 grafana | logger=ticker t=2024-01-22T13:55:10.512243418Z level=info msg=starting first_tick=2024-01-22T13:55:20Z 13:57:46 kafka | [2024-01-22 13:55:49,681] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-pap | [2024-01-22T13:55:53.554+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7=Assignment(partitions=[policy-pdp-pap-0])} 13:57:46 grafana | logger=http.server t=2024-01-22T13:55:10.514105877Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 13:57:46 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 13:57:46 kafka | [2024-01-22 13:55:49,683] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | [2024-01-22T13:55:53.610+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Successfully synced group in generation Generation{generationId=1, memberId='consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e', protocol='range'} 13:57:46 grafana | logger=grafanaStorageLogger t=2024-01-22T13:55:10.515531404Z level=info msg="Storage starting" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,683] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:53.611+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 13:57:46 grafana | logger=grafana.update.checker t=2024-01-22T13:55:10.551189049Z level=info msg="Update check succeeded" duration=35.884781ms 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,683] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:53.612+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7', protocol='range'} 13:57:46 grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.587311377Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 
code="database is locked" 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,684] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:53.613+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 13:57:46 grafana | logger=plugins.update.checker t=2024-01-22T13:55:10.59734089Z level=info msg="Update check succeeded" duration=82.044212ms 13:57:46 policy-db-migrator | > upgrade 0210-sequence.sql 13:57:46 kafka | [2024-01-22 13:55:49,697] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-pap | [2024-01-22T13:55:53.617+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 13:57:46 grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.606675295Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,698] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | [2024-01-22T13:55:53.623+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Adding newly assigned partitions: policy-pdp-pap-0 13:57:46 grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.617859558Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 13:57:46 kafka | [2024-01-22 13:55:49,698] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:53.654+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Found no committed offset for partition policy-pdp-pap-0 13:57:46 grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.669096692Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,698] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:53.655+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 13:57:46 grafana | 
logger=infra.usagestats t=2024-01-22T13:56:30.526440179Z level=info msg="Usage stats are ready to report" 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,698] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:55:53.678+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,711] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-pap | [2024-01-22T13:55:53.678+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 13:57:46 policy-db-migrator | > upgrade 0220-sequence.sql 13:57:46 kafka | [2024-01-22 13:55:49,712] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-pap | [2024-01-22T13:55:54.915+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,712] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:54.915+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' 13:57:46 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 13:57:46 kafka | [2024-01-22 13:55:49,712] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:55:54.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,712] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
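[annotation] "Found no committed offset for partition policy-pdp-pap-0" followed by "Resetting offset ... to position FetchPosition{offset=1 ...}" is the auto.offset.reset path: the group has never committed, so each consumer starts from the log end, which is offset 1 here, i.e. one record was already in the partition. A minimal sketch of the same positioning done explicitly; the group id below is hypothetical:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    // Sketch only: with no committed offset and auto.offset.reset=latest, the
    // consumer begins at the log-end offset, matching the reset logged above.
    public class OffsetResetSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-reset-demo"); // hypothetical group
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.assign(Collections.singleton(tp));
                consumer.seekToEnd(Collections.singleton(tp));
                System.out.println("starting position: " + consumer.position(tp));
            }
        }
    }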
(state.change.logger) 13:57:46 policy-pap | [2024-01-22T13:56:09.928+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,729] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-pap | [] 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:49,730] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 13:57:46 kafka | [2024-01-22 13:55:49,730] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 13:57:46 policy-pap | [2024-01-22T13:56:09.929+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:49,730] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"} 13:57:46 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 13:57:46 kafka | [2024-01-22 13:55:49,730] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
13:57:46 policy-pap | [2024-01-22T13:56:09.929+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,742] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,743] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-pap | [2024-01-22T13:56:09.938+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,743] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
13:57:46 policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting
13:57:46 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
13:57:46 kafka | [2024-01-22 13:55:49,743] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting listener
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,743] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting timer
13:57:46 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
13:57:46 kafka | [2024-01-22 13:55:49,759] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043]
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,760] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-pap | [2024-01-22T13:56:10.045+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043]
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,760] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
13:57:46 policy-pap | [2024-01-22T13:56:10.045+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting enqueue
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,760] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-pap | [2024-01-22T13:56:10.045+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate started
13:57:46 policy-db-migrator | > upgrade 0120-toscatrigger.sql
13:57:46 kafka | [2024-01-22 13:55:49,760] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-pap | [2024-01-22T13:56:10.046+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,774] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
13:57:46 kafka | [2024-01-22 13:55:49,774] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-pap | [2024-01-22T13:56:10.083+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,774] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.083+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
13:57:46 kafka | [2024-01-22 13:55:49,774] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.084+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:49,775] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
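Note on the exchange above: policy-pap publishes PDP_UPDATE on policy-pdp-pap while also consuming that topic (and policy-heartbeat), so its own request echoes straight back and the MessageTypeDispatcher drops it by message type. A minimal sketch of that discard step, assuming hypothetical class and method names rather than the actual policy-pap sources:

    import java.util.Set;
    import java.util.function.Consumer;

    // Sketch only: drop message types that this service itself publishes,
    // so self-echoed requests are ignored. Names are illustrative.
    final class TypeFilteringDispatcher {
        // Types PAP sends out and therefore discards when they echo back.
        private static final Set<String> OUTBOUND_ONLY =
                Set.of("PDP_UPDATE", "PDP_STATE_CHANGE");
        private final Consumer<String> handler;

        TypeFilteringDispatcher(Consumer<String> handler) { this.handler = handler; }

        void onMessage(String messageName, String json) {
            if (OUTBOUND_ONLY.contains(messageName)) {
                System.out.println("discarding event of type " + messageName);
                return; // our own request came back; nothing to do
            }
            handler.accept(json); // e.g. a PDP_STATUS response from the PDP
        }
    }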
13:57:46 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:49,787] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.084+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
13:57:46 kafka | [2024-01-22 13:55:49,789] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
13:57:46 policy-pap | [2024-01-22T13:56:10.105+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:57:46 kafka | [2024-01-22 13:55:49,789] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 kafka | [2024-01-22 13:55:49,789] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.108+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:49,789] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
13:57:46 kafka | [2024-01-22 13:55:49,842] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | > upgrade 0140-toscaparameter.sql
13:57:46 policy-pap | [2024-01-22T13:56:10.109+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
13:57:46 kafka | [2024-01-22 13:55:49,843] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.116+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:49,844] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
13:57:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:49,844] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.134+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping
13:57:46 kafka | [2024-01-22 13:55:49,844] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping enqueue
13:57:46 kafka | [2024-01-22 13:55:49,855] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping timer
13:57:46 kafka | [2024-01-22 13:55:49,856] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | > upgrade 0150-toscaproperty.sql
13:57:46 policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043]
13:57:46 kafka | [2024-01-22 13:55:49,856] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping listener
13:57:46 kafka | [2024-01-22 13:55:49,857] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
13:57:46 policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopped
13:57:46 kafka | [2024-01-22 13:55:49,857] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.138+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:57:46 kafka | [2024-01-22 13:55:49,865] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:49,867] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
13:57:46 policy-pap | [2024-01-22T13:56:10.138+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f41ee548-273a-4dd1-a197-a877ac7fd0e5
13:57:46 kafka | [2024-01-22 13:55:49,867] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate successful
13:57:46 kafka | [2024-01-22 13:55:49,867] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 start publishing next request
13:57:46 kafka | [2024-01-22 13:55:49,868] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
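The teardown above is driven by correlation: the PDP_STATUS reply carries response.responseTo equal to the PDP_UPDATE requestId (f41ee548-273a-4dd1-a197-a877ac7fd0e5), which is what lets PAP cancel the 30-second update timer it registered when it published. A minimal sketch of that bookkeeping, assuming an illustrative map-based registry rather than the real TimerManager API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch only: track an expiry deadline per outstanding requestId and
    // cancel it when the matching response arrives. Names are illustrative.
    final class RequestTimers {
        private final Map<String, Long> expiryByRequestId = new ConcurrentHashMap<>();

        void register(String requestId, long nowMs, long timeoutMs) {
            // corresponds to "update timer registered ... expireMs=..."
            expiryByRequestId.put(requestId, nowMs + timeoutMs);
        }

        boolean onResponse(String responseTo) {
            // corresponds to "update timer cancelled" when the PDP_STATUS
            // response arrives before the deadline
            return expiryByRequestId.remove(responseTo) != null;
        }
    }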
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting
13:57:46 kafka | [2024-01-22 13:55:49,876] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
13:57:46 policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting listener
13:57:46 kafka | [2024-01-22 13:55:49,877] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting timer
13:57:46 kafka | [2024-01-22 13:55:49,877] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149]
13:57:46 kafka | [2024-01-22 13:55:49,877] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting enqueue
13:57:46 kafka | [2024-01-22 13:55:49,877] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
13:57:46 kafka | [2024-01-22 13:55:49,888] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange started
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,889] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149]
13:57:46 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
13:57:46 kafka | [2024-01-22 13:55:49,889] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
13:57:46 policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,889] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:49,890] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-pap | [2024-01-22T13:56:10.160+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:49,896] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:49,896] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-pap | [2024-01-22T13:56:10.160+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
13:57:46 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
13:57:46 kafka | [2024-01-22 13:55:49,896] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
13:57:46 policy-pap | [2024-01-22T13:56:10.174+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:49,897] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.175+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1e11feec-3c6a-4861-a178-a1d471866c80
13:57:46 kafka | [2024-01-22 13:55:49,897] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.199+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:49,909] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:49,910] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.199+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
13:57:46 kafka | [2024-01-22 13:55:49,910] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
13:57:46 policy-pap | [2024-01-22T13:56:10.204+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:49,911] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:49,911] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping
13:57:46 kafka | [2024-01-22 13:55:50,163] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping enqueue
13:57:46 kafka | [2024-01-22 13:55:50,164] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping timer
13:57:46 kafka | [2024-01-22 13:55:50,164] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149]
13:57:46 kafka | [2024-01-22 13:55:50,165] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping listener
13:57:46 kafka | [2024-01-22 13:55:50,165] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopped
13:57:46 kafka | [2024-01-22 13:55:50,175] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange successful
13:57:46 kafka | [2024-01-22 13:55:50,176] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 start publishing next request
13:57:46 kafka | [2024-01-22 13:55:50,176] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting
13:57:46 kafka | [2024-01-22 13:55:50,176] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting listener
13:57:46 kafka | [2024-01-22 13:55:50,176] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting timer
13:57:46 kafka | [2024-01-22 13:55:50,187] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=8cc3dde7-8a50-459b-a008-976e7631331f, expireMs=1705931800205]
13:57:46 kafka | [2024-01-22 13:55:50,189] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | > upgrade 0100-upgrade.sql
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting enqueue
13:57:46 kafka | [2024-01-22 13:55:50,189] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate started
13:57:46 kafka | [2024-01-22 13:55:50,190] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | select 'upgrade to 1100 completed' as msg
13:57:46 policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:50,190] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:50,205] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.215+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:50,206] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | msg
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:50,206] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | upgrade to 1100 completed
13:57:46 policy-pap | [2024-01-22T13:56:10.215+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
13:57:46 kafka | [2024-01-22 13:55:50,206] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.216+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:57:46 kafka | [2024-01-22 13:55:50,206] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
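This second PDP_UPDATE differs from the first only in carrying an explicit "policiesToBeUndeployed":[] alongside "policiesToBeDeployed":[]. A sketch of the payload shape as a record, with field names taken from the logged JSON and element types inferred (an assumption, not the ONAP models API):

    import java.util.List;

    // Shape of the logged PDP_UPDATE payload. Field names come from the
    // JSON above; both policy lists are empty in the log, so their element
    // type is inferred here (Object as a placeholder).
    record PdpUpdateMsg(
            String source,
            long pdpHeartbeatIntervalMs,
            List<Object> policiesToBeDeployed,
            List<Object> policiesToBeUndeployed,
            String messageName,
            String requestId,
            long timestampMs,
            String name,
            String pdpGroup,
            String pdpSubgroup) {}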
13:57:46 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
13:57:46 policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:50,218] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.216+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
13:57:46 kafka | [2024-01-22 13:55:50,219] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
13:57:46 policy-pap | [2024-01-22T13:56:10.226+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
13:57:46 kafka | [2024-01-22 13:55:50,219] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 kafka | [2024-01-22 13:55:50,219] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
13:57:46 kafka | [2024-01-22 13:55:50,219] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
13:57:46 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
13:57:46 kafka | [2024-01-22 13:55:50,226] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:50,227] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping
13:57:46 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
13:57:46 kafka | [2024-01-22 13:55:50,227] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping enqueue
13:57:46 kafka | [2024-01-22 13:55:50,227] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-db-migrator | 
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping timer
13:57:46 kafka | [2024-01-22 13:55:50,227] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8cc3dde7-8a50-459b-a008-976e7631331f, expireMs=1705931800205]
13:57:46 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
13:57:46 kafka | [2024-01-22 13:55:50,236] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping listener
13:57:46 kafka | [2024-01-22 13:55:50,236] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopped
13:57:46 kafka | [2024-01-22 13:55:50,236] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8cc3dde7-8a50-459b-a008-976e7631331f
13:57:46 kafka | [2024-01-22 13:55:50,236] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | > upgrade 0120-audit_sequence.sql
13:57:46 policy-pap | [2024-01-22T13:56:10.231+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate successful
13:57:46 kafka | [2024-01-22 13:55:50,236] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:10.231+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 has no more requests
13:57:46 kafka | [2024-01-22 13:55:50,244] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
13:57:46 policy-pap | [2024-01-22T13:56:15.766+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
13:57:46 kafka | [2024-01-22 13:55:50,244] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:15.775+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
13:57:46 kafka | [2024-01-22 13:55:50,245] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:16.175+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,245] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:16.735+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,245] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
13:57:46 policy-pap | [2024-01-22T13:56:16.736+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,254] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:17.261+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,255] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:17.574+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
13:57:46 kafka | [2024-01-22 13:55:50,255] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:17.674+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
13:57:46 kafka | [2024-01-22 13:55:50,255] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
13:57:46 policy-pap | [2024-01-22T13:56:17.674+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,255] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:17.675+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,267] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
13:57:46 policy-pap | [2024-01-22T13:56:17.688+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-22T13:56:17Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-22T13:56:17Z, user=policyadmin)]
13:57:46 kafka | [2024-01-22 13:55:50,268] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:18.378+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,268] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
13:57:46 kafka | [2024-01-22 13:55:50,268] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
13:57:46 kafka | [2024-01-22 13:55:50,268] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
13:57:46 policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,275] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,276] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:18.391+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-22T13:56:18Z, user=policyadmin)]
13:57:46 kafka | [2024-01-22 13:55:50,276] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group defaultGroup
13:57:46 kafka | [2024-01-22 13:55:50,276] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | TRUNCATE TABLE sequence
13:57:46 policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,276] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
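The audit_sequence and statistics_sequence migrations above follow one pattern: create the sequence table if it is missing, then seed SEQ_GEN from the highest existing id, where IFNULL(max(id),0) falls back to 0 on an empty table so newly generated ids start above any existing row. A sketch of running that pattern over JDBC; the SQL is copied from the log, while the connection URL and credentials are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public final class SeedSequence {
        public static void main(String[] args) throws SQLException {
            // Placeholder URL/credentials; the real CSIT wiring differs.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://localhost:3306/policyadmin", "user", "pass");
                 Statement s = c.createStatement()) {
                s.execute("CREATE TABLE IF NOT EXISTS statistics_sequence "
                        + "(SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, "
                        + "PRIMARY KEY PK_SEQUENCE (SEQ_NAME))");
                // Seed from the current max id; 0 if pdpstatistics is empty.
                s.execute("INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) "
                        + "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))");
            }
        }
    }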
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
13:57:46 kafka | [2024-01-22 13:55:50,282] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
13:57:46 kafka | [2024-01-22 13:55:50,282] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
13:57:46 policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,283] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,283] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
13:57:46 policy-pap | [2024-01-22T13:56:18.783+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-22T13:56:18Z, user=policyadmin)]
13:57:46 kafka | [2024-01-22 13:55:50,283] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:39.341+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,290] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 
13:57:46 policy-pap | [2024-01-22T13:56:39.343+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
13:57:46 kafka | [2024-01-22 13:55:50,292] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | --------------
13:57:46 policy-pap | [2024-01-22T13:56:40.043+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043]
13:57:46 kafka | [2024-01-22 13:55:50,292] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | DROP TABLE pdpstatistics
13:57:46 policy-pap | [2024-01-22T13:56:40.149+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149]
13:57:46 kafka | [2024-01-22 13:55:50,292] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:50,293] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:50,314] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 
13:57:46 kafka | [2024-01-22 13:55:50,315] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
13:57:46 kafka | [2024-01-22 13:55:50,315] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | --------------
13:57:46 kafka | [2024-01-22 13:55:50,315] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
13:57:46 kafka | [2024-01-22 13:55:50,315] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
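The two "discarded (expired)" entries at 13:56:40 above are the deadlines registered at 13:56:10: expireMs values 1705931800043 and 1705931800149 sit exactly 30 000 ms after the corresponding registration times. A quick check using only the epoch values taken from the log lines:

    import java.time.Instant;

    public final class TimerMath {
        public static void main(String[] args) {
            // Values copied from the "update timer registered" log entry.
            long registeredMs = 1705931770043L; // 2024-01-22T13:56:10.043Z
            long expireMs = 1705931800043L;     // expireMs from the same entry
            System.out.println(expireMs - registeredMs);        // prints 30000
            System.out.println(Instant.ofEpochMilli(expireMs)); // 2024-01-22T13:56:40.043Z
        }
    }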
(state.change.logger) 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:50,381] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:50,381] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:50,381] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 13:57:46 kafka | [2024-01-22 13:55:50,381] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:50,382] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 13:57:46 policy-db-migrator | DROP TABLE statistics_sequence 13:57:46 kafka | [2024-01-22 13:55:50,410] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 13:57:46 policy-db-migrator | -------------- 13:57:46 kafka | [2024-01-22 13:55:50,410] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 13:57:46 policy-db-migrator | 13:57:46 kafka | [2024-01-22 13:55:50,410] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | policyadmin: OK: upgrade (1300) 13:57:46 kafka | [2024-01-22 13:55:50,410] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 13:57:46 policy-db-migrator | name version 13:57:46 kafka | [2024-01-22 13:55:50,411] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
13:57:46 policy-db-migrator | policyadmin 1300
13:57:46 kafka | [2024-01-22 13:55:50,416] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | ID script operation from_version to_version tag success atTime
13:57:46 kafka | [2024-01-22 13:55:50,416] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,416] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,417] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,417] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,423] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,424] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,424] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,424] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,424] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
13:57:46 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,428] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
13:57:46 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,429] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
13:57:46 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,429] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
13:57:46 kafka | [2024-01-22 13:55:50,429] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
13:57:46 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,429] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
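(Editor's aside: the kafka "Created log for partition ... with properties {...}" entries above embed the per-partition log configuration. A minimal stdlib sketch for pulling those properties out of one such entry; the LINE constant is simply a copy of a log line, nothing here is produced by the job.)

    import re

    LINE = ('[2024-01-22 13:55:50,292] INFO Created log for partition '
            '__consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 '
            'with properties {cleanup.policy=compact, compression.type="producer", '
            'segment.bytes=104857600} (kafka.log.LogManager)')

    m = re.search(r'Created log for partition (\S+).*with properties \{(.*)\}', LINE)
    partition, raw = m.group(1), m.group(2)
    props = dict(item.split('=', 1) for item in raw.split(', '))
    print(partition, props['cleanup.policy'], props['segment.bytes'])
    # -> __consumer_offsets-12 compact 104857600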
13:57:46 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
13:57:46 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
13:57:46 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
13:57:46 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
13:57:46 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
13:57:46 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
13:57:46 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
13:57:46 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
13:57:46 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
13:57:46 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
13:57:46 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
13:57:46 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
13:57:46 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
13:57:46 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
13:57:46 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
13:57:46 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
13:57:46 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
13:57:46 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
13:57:46 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
13:57:46 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
13:57:46 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
13:57:46 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
13:57:46 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
13:57:46 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
13:57:46 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
13:57:46 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
13:57:46 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
13:57:46 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
13:57:46 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
13:57:46 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
13:57:46 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
13:57:46 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
13:57:46 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
13:57:46 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
13:57:46 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
13:57:46 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
13:57:46 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
13:57:46 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
13:57:46 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
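(Editor's aside: the TRACE entries above record one become-leader transition per __consumer_offsets partition. A sketch of how the full set could be recovered from the raw console lines; SAMPLE is a hypothetical two-line stand-in for the stream.)

    import re

    SAMPLE = [
        'TRACE [Broker id=1] ... become-leader transition for partition __consumer_offsets-3 (state.change.logger)',
        'TRACE [Broker id=1] ... become-leader transition for partition __consumer_offsets-18 (state.change.logger)',
    ]
    pat = re.compile(r'become-leader transition for partition __consumer_offsets-(\d+)')
    partitions = sorted(int(m.group(1)) for m in map(pat.search, SAMPLE) if m)
    print(partitions)  # over the whole log this yields all 50 partitions, 0..49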
13:57:46 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
13:57:46 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
13:57:46 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
13:57:46 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
13:57:46 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
13:57:46 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:22
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
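(Editor's aside: the GroupCoordinator elections above cover the 50 __consumer_offsets partitions. Kafka assigns a consumer group to a coordinator partition by hashing the group id with Java's String.hashCode, modulo the partition count; a Python sketch of that mapping, assuming the default 50 partitions seen in this log. The group id is hypothetical.)

    def java_string_hash(s: str) -> int:
        # Java String.hashCode semantics: h = 31*h + ch, with signed 32-bit overflow
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - (1 << 32) if h & (1 << 31) else h

    def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
        return abs(java_string_hash(group_id)) % num_partitions

    print(coordinator_partition('example-group'))  # which __consumer_offsets-N owns this group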
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2201241355151100u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2201241355151300u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2201241355151300u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2201241355151300u 1 2024-01-22 13:55:22
13:57:46 policy-db-migrator | policyadmin: OK @ 1300
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,453] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,453] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 15 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,455] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,455] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
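(Editor's aside: the policy-db-migrator ledger printed above ("ID script operation from_version to_version tag success atTime", rows 1-126, ending in "policyadmin: OK @ 1300") is whitespace-delimited except for the two-field timestamp. A small parser sketch; the row literal is copied straight from the log.)

    COLUMNS = ['id', 'script', 'operation', 'from_version',
               'to_version', 'tag', 'success', 'atTime']

    def parse_ledger_row(row: str) -> dict:
        parts = row.split()
        # the last two whitespace fields together form the atTime timestamp
        return dict(zip(COLUMNS, parts[:7] + [' '.join(parts[7:])]))

    print(parse_ledger_row('126 0120-statistics_sequence.sql upgrade 1200 1300 '
                           '2201241355151300u 1 2024-01-22 13:55:22'))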
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,457] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,457] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
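(Editor's aside: each "Finished loading offsets and group metadata" entry above reports the total load time and the share spent queued in the scheduler; the difference is the actual load work. A sketch extracting both figures from one line copied out of the log.)

    import re

    LINE = ('[GroupMetadataManager brokerId=1] Finished loading offsets and group '
            'metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, '
            'of which 6 milliseconds was spent in the scheduler.')

    pat = re.compile(r'from (__consumer_offsets-\d+) in (\d+) milliseconds'
                     r'.*of which (\d+) milliseconds')
    partition, total_ms, sched_ms = pat.search(LINE).groups()
    print(partition, int(total_ms) - int(sched_ms))  # -> __consumer_offsets-9 1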
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [Broker id=1] Finished LeaderAndIsr request in 1165ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:57:46 kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 13:57:46 kafka | [2024-01-22 13:55:50,465] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Zh415qvLQvmHe6oa34REOg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,470] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,470] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,470] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,473] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 13:57:46 kafka | [2024-01-22 13:55:50,509] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 13:57:46 kafka | [2024-01-22 13:55:50,519] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 79c954dd-4645-472b-b928-ee2d4186f7c1 in Empty state. Created a new member id consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 13:57:46 kafka | [2024-01-22 13:55:50,529] INFO [GroupCoordinator 1]: Preparing to rebalance group 79c954dd-4645-472b-b928-ee2d4186f7c1 in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 13:57:46 kafka | [2024-01-22 13:55:50,530] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 13:57:46 kafka | [2024-01-22 13:55:50,581] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group e65163a7-0954-4bf8-9924-8c41fa40f9af in Empty state. Created a new member id consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:50,585] INFO [GroupCoordinator 1]: Preparing to rebalance group e65163a7-0954-4bf8-9924-8c41fa40f9af in state PreparingRebalance with old generation 0 (__consumer_offsets-41) (reason: Adding new member consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:53,541] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:53,546] INFO [GroupCoordinator 1]: Stabilized group 79c954dd-4645-472b-b928-ee2d4186f7c1 generation 1 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:53,570] INFO [GroupCoordinator 1]: Assignment received from leader consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e for group 79c954dd-4645-472b-b928-ee2d4186f7c1 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:53,570] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:53,592] INFO [GroupCoordinator 1]: Stabilized group e65163a7-0954-4bf8-9924-8c41fa40f9af generation 1 (__consumer_offsets-41) with 1 members (kafka.coordinator.group.GroupCoordinator)
13:57:46 kafka | [2024-01-22 13:55:53,608] INFO [GroupCoordinator 1]: Assignment received from leader consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3 for group e65163a7-0954-4bf8-9924-8c41fa40f9af for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
13:57:46 ++ echo 'Tearing down containers...'
13:57:46 Tearing down containers...
13:57:46 ++ docker-compose down -v --remove-orphans
13:57:47 Stopping policy-apex-pdp ...
13:57:47 Stopping policy-pap ...
13:57:47 Stopping policy-api ...
13:57:47 Stopping kafka ...
13:57:47 Stopping grafana ...
13:57:47 Stopping mariadb ...
13:57:47 Stopping prometheus ...
13:57:47 Stopping simulator ...
13:57:47 Stopping compose_zookeeper_1 ...
13:57:47 Stopping grafana ... done
13:57:47 Stopping prometheus ... done
13:57:57 Stopping policy-apex-pdp ... done
13:58:07 Stopping simulator ... done
13:58:07 Stopping policy-pap ... done
13:58:08 Stopping mariadb ... done
13:58:09 Stopping kafka ... done
13:58:09 Stopping compose_zookeeper_1 ... done
13:58:18 Stopping policy-api ... done
13:58:18 Removing policy-apex-pdp ...
13:58:18 Removing policy-pap ...
13:58:18 Removing policy-api ...
13:58:18 Removing kafka ...
13:58:18 Removing policy-db-migrator ...
13:58:18 Removing grafana ...
13:58:18 Removing mariadb ...
13:58:18 Removing prometheus ...
13:58:18 Removing simulator ...
13:58:18 Removing compose_zookeeper_1 ...
13:58:18 Removing policy-apex-pdp ... done
13:58:18 Removing policy-pap ... done
13:58:18 Removing grafana ... done
13:58:18 Removing kafka ... done
13:58:18 Removing simulator ... done
13:58:18 Removing policy-api ... done
13:58:18 Removing mariadb ... done
13:58:18 Removing prometheus ... done
13:58:18 Removing policy-db-migrator ... done
13:58:18 Removing compose_zookeeper_1 ... done
13:58:18 Removing network compose_default
13:58:18 ++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
13:58:18 + load_set
13:58:18 + _setopts=hxB
13:58:18 ++ echo braceexpand:hashall:interactive-comments:xtrace
13:58:18 ++ tr : ' '
13:58:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:58:18 + set +o braceexpand
13:58:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:58:18 + set +o hashall
13:58:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:58:18 + set +o interactive-comments
13:58:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
13:58:18 + set +o xtrace
13:58:18 ++ echo hxB
13:58:18 ++ sed 's/./& /g'
13:58:18 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:58:18 + set +h
13:58:18 + for i in $(echo "$_setopts" | sed 's/./& /g')
13:58:18 + set +x
13:58:18 + [[ -n /tmp/tmp.vXEmLvt3D4 ]]
13:58:18 + rsync -av /tmp/tmp.vXEmLvt3D4/ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
13:58:18 sending incremental file list
13:58:18 ./
13:58:18 log.html
13:58:18 output.xml
13:58:18 report.html
13:58:18 testplan.txt
13:58:18
13:58:18 sent 910,840 bytes received 95 bytes 1,821,870.00 bytes/sec
13:58:18 total size is 910,293 speedup is 1.00
13:58:18 + rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
13:58:18 + exit 0
13:58:18 $ ssh-agent -k
13:58:18 unset SSH_AUTH_SOCK;
13:58:18 unset SSH_AGENT_PID;
13:58:18 echo Agent pid 2142 killed;
13:58:18 [ssh-agent] Stopped.
13:58:18 Robot results publisher started...
13:58:18 -Parsing output xml:
13:58:19 Done!
13:58:19 WARNING! Could not find file: **/log.html
13:58:19 WARNING! Could not find file: **/report.html
13:58:19 -Copying log files to build dir:
13:58:19 Done!
13:58:19 -Assigning results to build:
13:58:19 Done!
13:58:19 -Checking thresholds:
13:58:19 Done!
13:58:19 Done publishing Robot results.
13:58:19 [PostBuildScript] - [INFO] Executing post build scripts.
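A note on the GroupCoordinator lines above: the "rebalance failed due to MemberIdRequiredException" reason is the normal first-contact handshake for a dynamic consumer, not an error. The broker creates a member id, asks the client to rejoin with it, moves the group from Empty to PreparingRebalance, and stabilizes it at generation 1 once the leader's assignment arrives, which is exactly the join / prepare / stabilize / assignment sequence logged for policy-pap and the two UUID-named groups. A minimal sketch for inspecting that state by hand, assuming the compose service is named kafka as in the teardown list and the kafka:9092 listener reported by state.change.logger (the tool ships as kafka-consumer-groups in Confluent images and kafka-consumer-groups.sh in Apache Kafka distributions):

    # List the groups the coordinator is tracking, then describe one of them.
    docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list
    # --state prints the lifecycle phase (Empty, PreparingRebalance, Stable)
    # seen in the log; --members would list the consumer-...-<uuid> member ids.
    docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 \
      --describe --group policy-pap --state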
13:58:19 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins12624779721734675757.sh
13:58:19 ---> sysstat.sh
13:58:19 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins11906087679230808903.sh
13:58:19 ---> package-listing.sh
13:58:19 ++ facter osfamily
13:58:19 ++ tr '[:upper:]' '[:lower:]'
13:58:20 + OS_FAMILY=debian
13:58:20 + workspace=/w/workspace/policy-pap-master-project-csit-verify-pap
13:58:20 + START_PACKAGES=/tmp/packages_start.txt
13:58:20 + END_PACKAGES=/tmp/packages_end.txt
13:58:20 + DIFF_PACKAGES=/tmp/packages_diff.txt
13:58:20 + PACKAGES=/tmp/packages_start.txt
13:58:20 + '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
13:58:20 + PACKAGES=/tmp/packages_end.txt
13:58:20 + case "${OS_FAMILY}" in
13:58:20 + dpkg -l
13:58:20 + grep '^ii'
13:58:20 + '[' -f /tmp/packages_start.txt ']'
13:58:20 + '[' -f /tmp/packages_end.txt ']'
13:58:20 + diff /tmp/packages_start.txt /tmp/packages_end.txt
13:58:20 + '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
13:58:20 + mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
13:58:20 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
13:58:20 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins12204705917561419458.sh
13:58:20 ---> capture-instance-metadata.sh
13:58:20 Setup pyenv:
13:58:20 system
13:58:20 3.8.13
13:58:20 3.9.13
13:58:20 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
13:58:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
13:58:21 lf-activate-venv(): INFO: Installing: lftools
13:58:32 lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
13:58:32 INFO: Running in OpenStack, capturing instance metadata
13:58:32 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins7571175974915932427.sh
13:58:32 provisioning config files...
13:58:32 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/config1742631325063875622tmp
13:58:32 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
13:58:32 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
13:58:32 [EnvInject] - Injecting environment variables from a build step.
13:58:32 [EnvInject] - Injecting as environment variables the properties content
13:58:32 SERVER_ID=logs
13:58:32
13:58:32 [EnvInject] - Variables injected successfully.
13:58:32 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins5929045176702373030.sh
13:58:32 ---> create-netrc.sh
13:58:32 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins11283686590870921537.sh
13:58:32 ---> python-tools-install.sh
13:58:32 Setup pyenv:
13:58:32 system
13:58:32 3.8.13
13:58:32 3.9.13
13:58:32 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
13:58:32 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
13:58:34 lf-activate-venv(): INFO: Installing: lftools
13:58:42 lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
13:58:42 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins4327540658111996079.sh
13:58:42 ---> sudo-logs.sh
13:58:42 Archiving 'sudo' log..
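The package-listing.sh trace above reduces to a small snapshot-and-diff: dpkg output is captured at job start and job end, and only the difference is archived. A condensed sketch of that logic, assuming a Debian-family host and using $WORKSPACE as a stand-in for the workspace path shown in the trace:

    # Snapshot currently installed packages (the 'ii' rows of dpkg -l).
    dpkg -l | grep '^ii' > /tmp/packages_end.txt
    # Diff against the snapshot taken at job start, if one exists;
    # diff exits non-zero when the files differ, so don't let that fail the job.
    if [ -f /tmp/packages_start.txt ]; then
      diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
    fi
    mkdir -p "$WORKSPACE/archives/"
    cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$WORKSPACE/archives/"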
13:58:42 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins16741567697499941198.sh
13:58:42 ---> job-cost.sh
13:58:42 Setup pyenv:
13:58:42 system
13:58:42 3.8.13
13:58:42 3.9.13
13:58:42 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
13:58:42 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
13:58:44 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
13:58:51 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
13:58:51 lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
13:58:52 lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
13:58:52 INFO: No Stack...
13:58:52 INFO: Retrieving Pricing Info for: v3-standard-8
13:58:52 INFO: Archiving Costs
13:58:52 [policy-pap-master-project-csit-verify-pap] $ /bin/bash -l /tmp/jenkins478837406985277266.sh
13:58:52 ---> logs-deploy.sh
13:58:52 Setup pyenv:
13:58:52 system
13:58:52 3.8.13
13:58:52 3.9.13
13:58:52 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
13:58:53 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
13:58:54 lf-activate-venv(): INFO: Installing: lftools
13:59:03 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
13:59:03 python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
13:59:04 lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
13:59:04 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/504
13:59:04 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
13:59:05 Archives upload complete.
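About the two pip resolver ERRORs above: they are warnings, not build failures. lftools 0.37.8 pins openstacksdk<1.5.0 while python-openstackclient 6.4.0 needs openstacksdk>=2.0.0, so no single openstacksdk version can satisfy both in the shared /tmp/venv-l0SW venv; each install step simply flips openstacksdk and pip reports whichever side is now broken. A quick way to surface the venv's current state instead of scrolling for ERROR lines (a sketch, assuming the venv path from the log):

    # Activate the shared venv and let pip enumerate broken requirements.
    source /tmp/venv-l0SW/bin/activate
    pip check                 # prints each unsatisfied requirement, if any
    pip show openstacksdk     # shows which version the last install left behind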
13:59:05 INFO: archiving logs to Nexus
13:59:06 ---> uname -a:
13:59:06 Linux prd-ubuntu1804-docker-8c-8g-14213 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
13:59:06
13:59:06 ---> lscpu:
13:59:06 Architecture: x86_64
13:59:06 CPU op-mode(s): 32-bit, 64-bit
13:59:06 Byte Order: Little Endian
13:59:06 CPU(s): 8
13:59:06 On-line CPU(s) list: 0-7
13:59:06 Thread(s) per core: 1
13:59:06 Core(s) per socket: 1
13:59:06 Socket(s): 8
13:59:06 NUMA node(s): 1
13:59:06 Vendor ID: AuthenticAMD
13:59:06 CPU family: 23
13:59:06 Model: 49
13:59:06 Model name: AMD EPYC-Rome Processor
13:59:06 Stepping: 0
13:59:06 CPU MHz: 2800.000
13:59:06 BogoMIPS: 5600.00
13:59:06 Virtualization: AMD-V
13:59:06 Hypervisor vendor: KVM
13:59:06 Virtualization type: full
13:59:06 L1d cache: 32K
13:59:06 L1i cache: 32K
13:59:06 L2 cache: 512K
13:59:06 L3 cache: 16384K
13:59:06 NUMA node0 CPU(s): 0-7
13:59:06 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
13:59:06
13:59:06 ---> nproc:
13:59:06 8
13:59:06
13:59:06 ---> df -h:
13:59:06 Filesystem      Size  Used  Avail  Use%  Mounted on
13:59:06 udev             16G     0    16G    0%  /dev
13:59:06 tmpfs           3.2G  708K   3.2G    1%  /run
13:59:06 /dev/vda1       155G   14G   142G    9%  /
13:59:06 tmpfs            16G     0    16G    0%  /dev/shm
13:59:06 tmpfs           5.0M     0   5.0M    0%  /run/lock
13:59:06 tmpfs            16G     0    16G    0%  /sys/fs/cgroup
13:59:06 /dev/vda15      105M  4.4M   100M    5%  /boot/efi
13:59:06 tmpfs           3.2G     0   3.2G    0%  /run/user/1001
13:59:06
13:59:06 ---> free -m:
13:59:06        total  used   free  shared  buff/cache  available
13:59:06 Mem:   32167   864  24831       0        6471      30846
13:59:06 Swap:   1023     0   1023
13:59:06
13:59:06 ---> ip addr:
13:59:06 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
13:59:06     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
13:59:06     inet 127.0.0.1/8 scope host lo
13:59:06        valid_lft forever preferred_lft forever
13:59:06     inet6 ::1/128 scope host
13:59:06        valid_lft forever preferred_lft forever
13:59:06 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
13:59:06     link/ether fa:16:3e:82:5e:43 brd ff:ff:ff:ff:ff:ff
13:59:06     inet 10.30.106.203/23 brd 10.30.107.255 scope global dynamic ens3
13:59:06        valid_lft 85856sec preferred_lft 85856sec
13:59:06     inet6 fe80::f816:3eff:fe82:5e43/64 scope link
13:59:06        valid_lft forever preferred_lft forever
13:59:06 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
13:59:06     link/ether 02:42:8e:3d:20:ac brd ff:ff:ff:ff:ff:ff
13:59:06     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
13:59:06        valid_lft forever preferred_lft forever
13:59:06
13:59:06 ---> sar -b -r -n DEV:
13:59:06 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14213) 01/22/24 _x86_64_ (8 CPU)
13:59:06 13:50:08 LINUX RESTART (8 CPU)
13:59:06
13:59:06 13:51:01      tps    rtps    wtps  bread/s   bwrtn/s
13:59:06 13:52:01    84.00   17.58   66.42  1020.10  24283.95
13:59:06 13:53:01   119.25   13.81  105.43  1118.08  32574.70
13:59:06 13:54:01   136.59    9.30  127.30  1653.86  66773.80
13:59:06 13:55:01   142.26    0.08  142.18     5.07  98718.35
13:59:06 13:56:01   312.11   14.20  297.92   765.54  36203.25
13:59:06 13:57:01    20.40    0.00   20.40     0.00  20855.96
13:59:06 13:58:01    28.21    0.05   28.16    10.66  21839.96
13:59:06 13:59:01    74.58    1.93   72.64   111.96  10231.37
13:59:06 Average:   114.67    7.12  107.55   585.65  38934.57
13:59:06
13:59:06 13:51:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
13:59:06 13:52:01 30147712 31712548 2791508 8.47 66568 1810176 1403648 4.13 856312 1645796 149616
13:59:06 13:53:01 29809112 31705804 3130108 9.50 85428 2106372 1395464 4.11 871720 1932292 150628
13:59:06 13:54:01 27194084 31650836 5745136 17.44 127800 4516200 1423944 4.19 1022104 4253468 1064928
13:59:06 13:55:01 25506560 31650528 7432660 22.56 140824 6122580 1511368 4.45 1037636 5858688 130060
13:59:06 13:56:01 23276472 29605044 9662748 29.34 156272 6271740 8859488 26.07 3257624 5789316 1352
13:59:06 13:57:01 23318964 29648248 9620256 29.21 156476 6272016 8793908 25.87 3216756 5786772 208
13:59:06 13:58:01 23552240 29907080 9386980 28.50 156880 6300196 7238800 21.30 2980640 5800952 200
13:59:06 13:59:01 25402484 31561184 7536736 22.88 160996 6115276 1565520 4.61 1337196 5647320 32196
13:59:06 Average: 26025954 30930159 6913266 20.99 131406 4939320 4024018 11.84 1822498 4589326 191148
13:59:06
13:59:06 13:51:01 IFACE           rxpck/s  txpck/s    rxkB/s  txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
13:59:06 13:52:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:52:01 lo                 1.13     1.13      0.12    0.12    0.00    0.00     0.00    0.00
13:59:06 13:52:01 ens3              51.46    36.99    791.65    6.38    0.00    0.00     0.00    0.00
13:59:06 13:53:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:53:01 lo                 1.40     1.40      0.14    0.14    0.00    0.00     0.00    0.00
13:59:06 13:53:01 ens3              61.36    46.64    871.59    8.46    0.00    0.00     0.00    0.00
13:59:06 13:54:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:54:01 lo                 8.33     8.33      0.81    0.81    0.00    0.00     0.00    0.00
13:59:06 13:54:01 ens3             680.72   379.49  17418.18   28.40    0.00    0.00     0.00    0.00
13:59:06 13:54:01 br-30abeaa9709f    0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:55:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:55:01 lo                 4.13     4.13      0.38    0.38    0.00    0.00     0.00    0.00
13:59:06 13:55:01 ens3             500.45   267.32  16093.65   19.42    0.00    0.00     0.00    0.00
13:59:06 13:55:01 br-30abeaa9709f    0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:56:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:56:01 veth61bc196        5.05     6.50      0.81    0.92    0.00    0.00     0.00    0.00
13:59:06 13:56:01 veth64d3651        0.55     0.93      0.06    0.32    0.00    0.00     0.00    0.00
13:59:06 13:56:01 veth66d2419        0.33     0.73      0.03    0.65    0.00    0.00     0.00    0.00
13:59:06 13:57:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:57:01 veth61bc196        0.17     0.35      0.01    0.02    0.00    0.00     0.00    0.00
13:59:06 13:57:01 veth64d3651        0.27     0.22      0.02    0.01    0.00    0.00     0.00    0.00
13:59:06 13:57:01 veth66d2419        0.53     0.53      0.05    1.51    0.00    0.00     0.00    0.00
13:59:06 13:58:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:58:01 veth61bc196        0.17     0.48      0.01    0.03    0.00    0.00     0.00    0.00
13:59:06 13:58:01 vetha86a375       53.97    48.13     21.03   40.49    0.00    0.00     0.00    0.00
13:59:06 13:58:01 veth571a967        0.00     0.58      0.00    0.03    0.00    0.00     0.00    0.00
13:59:06 13:59:01 docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 13:59:01 lo                35.72    35.72      6.25    6.25    0.00    0.00     0.00    0.00
13:59:06 13:59:01 ens3            1723.59  1032.07  36087.21  158.65    0.00    0.00     0.00    0.00
13:59:06 Average: docker0            0.00     0.00      0.00    0.00    0.00    0.00     0.00    0.00
13:59:06 Average: lo                 3.97     3.97      0.74    0.74    0.00    0.00     0.00    0.00
13:59:06 Average: ens3             171.88    99.74   4412.81   12.66    0.00    0.00     0.00    0.00
13:59:06
13:59:06 ---> sar -P ALL:
13:59:06 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14213) 01/22/24 _x86_64_ (8 CPU)
13:59:06 13:50:08 LINUX RESTART (8 CPU)
13:59:06
13:59:06 13:51:01 CPU   %user  %nice  %system  %iowait  %steal  %idle
13:59:06 13:52:01 all    7.44   0.00     0.57     6.14    0.04  85.81
13:59:06 13:52:01   0   18.13   0.00     0.90    13.77    0.05  67.15
13:59:06 13:52:01   1   29.05   0.00     1.82     2.56    0.13  66.43
13:59:06 13:52:01   2    2.49   0.00     0.30     0.30    0.05  96.86
13:59:06 13:52:01   3    4.99   0.00     0.33     0.37    0.02  94.29
13:59:06 13:52:01   4    0.80   0.00     0.35     0.57    0.00  98.28
13:59:06 13:52:01   5    0.52   0.00     0.28     2.55    0.00  96.65
13:59:06 13:52:01   6    0.62   0.00     0.32    28.70    0.03  70.33
13:59:06 13:52:01   7    3.00   0.00     0.30     0.33    0.02  96.35
13:59:06 13:53:01 all    9.75   0.00     0.71     4.15    0.03  85.35
13:59:06 13:53:01   0    0.30   0.00     0.33    22.41    0.02  76.94
13:59:06 13:53:01   1    5.62   0.00     0.58     1.67    0.02  92.12
13:59:06 13:53:01   2   31.39   0.00     1.87     2.83    0.07  63.85
13:59:06 13:53:01   3   14.58   0.00     1.02     0.27    0.03  84.10
13:59:06 13:53:01   4    2.75   0.00     0.42     1.64    0.05  95.14
13:59:06 13:53:01   5   12.41   0.00     0.42     1.02    0.03  86.12
13:59:06 13:53:01   6    4.13   0.00     0.57     2.78    0.02  92.50
13:59:06 13:53:01   7    6.91   0.00     0.43     0.62    0.02  92.02
13:59:06 13:54:01 all    9.09   0.00     3.60    10.41    0.07  76.83
13:59:06 13:54:01   0    8.60   0.00     3.77    11.03    0.08  76.51
13:59:06 13:54:01   1    7.22   0.00     4.19    18.84    0.05  69.69
13:59:06 13:54:01   2    8.93   0.00     2.95    15.82    0.05  72.25
13:59:06 13:54:01   3    8.54   0.00     3.71    27.39    0.05  60.31
13:59:06 13:54:01   4   10.72   0.00     3.91     1.93    0.08  83.36
13:59:06 13:54:01   5    9.13   0.00     4.40     0.70    0.07  85.71
13:59:06 13:54:01   6    8.38   0.00     2.53     6.95    0.07  82.07
13:59:06 13:54:01   7   11.17   0.00     3.38     0.61    0.08  84.76
13:59:06 13:55:01 all    5.81   0.00     2.47    11.66    0.04  80.02
13:59:06 13:55:01   0    5.70   0.00     2.93     2.30    0.05  89.02
13:59:06 13:55:01   1    5.99   0.00     2.52    42.35    0.05  49.09
13:59:06 13:55:01   2    3.25   0.00     2.51    10.22    0.03  83.99
13:59:06 13:55:01   3    6.32   0.00     2.75     7.31    0.05  83.58
13:59:06 13:55:01   4    6.78   0.00     2.60     6.78    0.03  83.81
13:59:06 13:55:01   5    9.26   0.00     1.48     0.08    0.03  89.14
13:59:06 13:55:01   6    4.25   0.00     1.85    24.10    0.05  69.75
13:59:06 13:55:01   7    4.91   0.00     3.09     0.35    0.03  91.61
13:59:06 13:56:01 all   28.36   0.00     3.93     4.13    0.11  63.47
13:59:06 13:56:01   0   35.02   0.00     4.84     0.24    0.10  59.81
13:59:06 13:56:01   1   33.77   0.00     4.37     5.45    0.12  56.29
13:59:06 13:56:01   2   22.89   0.00     3.92     1.48    0.12  71.59
13:59:06 13:56:01   3   26.79   0.00     4.05    10.40    0.14  58.62
13:59:06 13:56:01   4   23.71   0.00     3.29    12.01    0.12  60.88
13:59:06 13:56:01   5   31.64   0.00     4.11     1.38    0.12  62.75
13:59:06 13:56:01   6   23.43   0.00     3.50     0.71    0.12  72.24
13:59:06 13:56:01   7   29.63   0.00     3.40     1.37    0.10  65.50
13:59:06 13:57:01 all    4.68   0.00     0.51     1.60    0.05  93.16
13:59:06 13:57:01   0    3.44   0.00     0.68     0.02    0.08  95.78
13:59:06 13:57:01   1    4.71   0.00     0.37     0.02    0.05  94.86
13:59:06 13:57:01   2    5.88   0.00     0.64     0.12    0.03  93.33
13:59:06 13:57:01   3    4.98   0.00     0.69     0.10    0.05  94.18
13:59:06 13:57:01   4    6.37   0.00     0.54    12.46    0.05  80.59
13:59:06 13:57:01   5    4.66   0.00     0.47     0.00    0.05  94.82
13:59:06 13:57:01   6    3.17   0.00     0.33     0.10    0.03  96.36
13:59:06 13:57:01   7    4.22   0.00     0.33     0.00    0.03  95.41
13:59:06 13:58:01 all    1.57   0.00     0.37     1.67    0.04  96.34
13:59:06 13:58:01   0    1.69   0.00     0.45     0.00    0.07  97.79
13:59:06 13:58:01   1    3.04   0.00     0.35     0.02    0.03  96.57
13:59:06 13:58:01   2    1.39   0.00     0.38     0.10    0.05  98.08
13:59:06 13:58:01   3    2.31   0.00     0.47     0.10    0.03  97.09
13:59:06 13:58:01   4    0.75   0.00     0.38    13.03    0.03  85.80
13:59:06 13:58:01   5    1.13   0.00     0.40     0.02    0.03  98.42
13:59:06 13:58:01   6    0.85   0.00     0.32     0.00    0.03  98.80
13:59:06 13:58:01   7    1.42   0.00     0.23     0.15    0.03  98.16
13:59:06 13:59:01 all    8.24   0.00     0.70     1.07    0.03  89.94
13:59:06 13:59:01   0    5.04   0.00     0.75     0.05    0.03  94.13
13:59:06 13:59:01   1   12.78   0.00     0.72     0.59    0.03  85.89
13:59:06 13:59:01   2    3.22   0.00     0.52     0.25    0.02  96.00
13:59:06 13:59:01   3    0.73   0.00     0.43     0.13    0.02  98.68
13:59:06 13:59:01   4    4.69   0.00     0.73     6.09    0.03  88.45
13:59:06 13:59:01   5   24.92   0.00     1.30     0.43    0.07  73.28
13:59:06 13:59:01   6    0.99   0.00     0.53     0.87    0.05  97.56
13:59:06 13:59:01   7   13.59   0.00     0.65     0.20    0.03  85.53
13:59:06 Average: all    9.35   0.00     1.60     5.09    0.05  83.90
13:59:06 Average:   0    9.72   0.00     1.83     6.23    0.06  82.17
13:59:06 Average:   1   12.74   0.00     1.86     8.87    0.06  76.48
13:59:06 Average:   2    9.92   0.00     1.63     3.87    0.05  84.53
13:59:06 Average:   3    8.63   0.00     1.67     5.72    0.05  83.92
13:59:06 Average:   4    7.05   0.00     1.52     6.81    0.05  84.57
13:59:06 Average:   5   11.70   0.00     1.60     0.77    0.05  85.88
13:59:06 Average:   6    5.71   0.00     1.24     8.01    0.05  84.99
13:59:06 Average:   7    9.33   0.00     1.47     0.45    0.04  88.70
13:59:06
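The sar tables close out the job with a resource profile of the build node. Read together: the heaviest disk-write burst lands in the 13:55:01 interval (bwrtn/s 98718.35, roughly 96 MB/s) with %iowait peaking at 42.35 on CPU 1 in the same interval, and the CPU-heavy stretch is 13:56:01 (28.36 %user across all CPUs) while the CSIT containers were up; attributing the spikes to image loading and test execution is an inference, since the log itself does not correlate the two. A sketch of an equivalent ad-hoc capture with the sysstat tools used here, assuming sysstat is installed on the node:

    # Eight 60-second samples of I/O, memory and per-interface network stats,
    # matching the 13:51-13:59 window of the tables above.
    sar -b -r -n DEV 60 8
    # Per-CPU utilisation over the same window (the '-P ALL' table).
    sar -P ALL 60 8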