11:55:53 Started by upstream project "policy-docker-master-merge-java" build number 333
11:55:53 originally caused by:
11:55:53 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137060
11:55:53 Running as SYSTEM
11:55:53 [EnvInject] - Loading node environment variables.
11:55:53 Building remotely on prd-ubuntu1804-docker-8c-8g-14552 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
11:55:53 [ssh-agent] Looking for ssh-agent implementation...
11:55:53 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
11:55:53 $ ssh-agent
11:55:53 SSH_AUTH_SOCK=/tmp/ssh-2G3xVJK9wIwP/agent.2121
11:55:53 SSH_AGENT_PID=2123
11:55:53 [ssh-agent] Started.
11:55:53 Running ssh-add (command line suppressed)
11:55:53 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2662552322566714215.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2662552322566714215.key)
11:55:53 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
11:55:53 The recommended git tool is: NONE
11:55:56 using credential onap-jenkins-ssh
11:55:56 Wiping out workspace first.
11:55:56 Cloning the remote Git repository
11:55:56 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
11:55:56 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
11:55:56 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
11:55:56 > git --version # timeout=10
11:55:56 > git --version # 'git version 2.17.1'
11:55:56 using GIT_SSH to set credentials Gerrit user
11:55:56 Verifying host key using manually-configured host key entries
11:55:56 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
11:55:56 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
11:55:56 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
11:55:57 Avoid second fetch
11:55:57 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
11:55:57 Checking out Revision 31c61d495474985b8cc3460464f888651d0919ed (refs/remotes/origin/master)
11:55:57 > git config core.sparsecheckout # timeout=10
11:55:57 > git checkout -f 31c61d495474985b8cc3460464f888651d0919ed # timeout=30
11:55:57 Commit message: "Add kafka support in K8s CSIT"
11:55:57 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10
11:55:57 provisioning config files...
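The checkout above is pinned to an exact revision rather than a branch tip. A minimal sketch of reproducing the same pinned checkout by hand, using only commands that appear in the log (the local directory is illustrative):

    # Sketch: reproduce the job's pinned checkout of policy/docker.
    git init /tmp/policy-docker && cd /tmp/policy-docker
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    # Detach at the revision the build used ("Add kafka support in K8s CSIT").
    git checkout -f 31c61d495474985b8cc3460464f888651d0919ed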
11:55:57 copy managed file [npmrc] to file:/home/jenkins/.npmrc
11:55:57 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
11:55:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17407350503572915132.sh
11:55:57 ---> python-tools-install.sh
11:55:57 Setup pyenv:
11:55:57 * system (set by /opt/pyenv/version)
11:55:57 * 3.8.13 (set by /opt/pyenv/version)
11:55:57 * 3.9.13 (set by /opt/pyenv/version)
11:55:57 * 3.10.6 (set by /opt/pyenv/version)
11:56:02 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-dyeB
11:56:02 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
11:56:05 lf-activate-venv(): INFO: Installing: lftools
11:56:46 lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH
11:56:46 Generating Requirements File
11:57:17 Python 3.10.6
11:57:17 pip 23.3.2 from /tmp/venv-dyeB/lib/python3.10/site-packages/pip (python 3.10)
11:57:17 appdirs==1.4.4
11:57:17 argcomplete==3.2.1
11:57:17 aspy.yaml==1.3.0
11:57:17 attrs==23.2.0
11:57:17 autopage==0.5.2
11:57:17 beautifulsoup4==4.12.3
11:57:17 boto3==1.34.25
11:57:17 botocore==1.34.25
11:57:17 bs4==0.0.2
11:57:17 cachetools==5.3.2
11:57:17 certifi==2023.11.17
11:57:17 cffi==1.16.0
11:57:17 cfgv==3.4.0
11:57:17 chardet==5.2.0
11:57:17 charset-normalizer==3.3.2
11:57:17 click==8.1.7
11:57:17 cliff==4.5.0
11:57:17 cmd2==2.4.3
11:57:17 cryptography==3.3.2
11:57:17 debtcollector==2.5.0
11:57:17 decorator==5.1.1
11:57:17 defusedxml==0.7.1
11:57:17 Deprecated==1.2.14
11:57:17 distlib==0.3.8
11:57:17 dnspython==2.5.0
11:57:17 docker==4.2.2
11:57:17 dogpile.cache==1.3.0
11:57:17 email-validator==2.1.0.post1
11:57:17 filelock==3.13.1
11:57:17 future==0.18.3
11:57:17 gitdb==4.0.11
11:57:17 GitPython==3.1.41
11:57:17 google-auth==2.26.2
11:57:17 httplib2==0.22.0
11:57:17 identify==2.5.33
11:57:17 idna==3.6
11:57:17 importlib-resources==1.5.0
11:57:17 iso8601==2.1.0
11:57:17 Jinja2==3.1.3
11:57:17 jmespath==1.0.1
11:57:17 jsonpatch==1.33
11:57:17 jsonpointer==2.4
11:57:17 jsonschema==4.21.1
11:57:17 jsonschema-specifications==2023.12.1
11:57:17 keystoneauth1==5.5.0
11:57:17 kubernetes==29.0.0
11:57:17 lftools==0.37.8
11:57:17 lxml==5.1.0
11:57:17 MarkupSafe==2.1.4
11:57:17 msgpack==1.0.7
11:57:17 multi_key_dict==2.0.3
11:57:17 munch==4.0.0
11:57:17 netaddr==0.10.1
11:57:17 netifaces==0.11.0
11:57:17 niet==1.4.2
11:57:17 nodeenv==1.8.0
11:57:17 oauth2client==4.1.3
11:57:17 oauthlib==3.2.2
11:57:17 openstacksdk==0.62.0
11:57:17 os-client-config==2.1.0
11:57:17 os-service-types==1.7.0
11:57:17 osc-lib==3.0.0
11:57:17 oslo.config==9.3.0
11:57:17 oslo.context==5.3.0
11:57:17 oslo.i18n==6.2.0
11:57:17 oslo.log==5.4.0
11:57:17 oslo.serialization==5.3.0
11:57:17 oslo.utils==7.0.0
11:57:17 packaging==23.2
11:57:17 pbr==6.0.0
11:57:17 platformdirs==4.1.0
11:57:17 prettytable==3.9.0
11:57:17 pyasn1==0.5.1
11:57:17 pyasn1-modules==0.3.0
11:57:17 pycparser==2.21
11:57:17 pygerrit2==2.0.15
11:57:17 PyGithub==2.1.1
11:57:17 pyinotify==0.9.6
11:57:17 PyJWT==2.8.0
11:57:17 PyNaCl==1.5.0
11:57:17 pyparsing==2.4.7
11:57:17 pyperclip==1.8.2
11:57:17 pyrsistent==0.20.0
11:57:17 python-cinderclient==9.4.0
11:57:17 python-dateutil==2.8.2
11:57:17 python-heatclient==3.4.0
11:57:17 python-jenkins==1.8.2
11:57:17 python-keystoneclient==5.3.0
11:57:17 python-magnumclient==4.3.0
11:57:17 python-novaclient==18.4.0
11:57:17 python-openstackclient==6.0.0
11:57:17 python-swiftclient==4.4.0
11:57:17 pytz==2023.3.post1
11:57:17 PyYAML==6.0.1
11:57:17 referencing==0.32.1
11:57:17 requests==2.31.0
11:57:17 requests-oauthlib==1.3.1
11:57:17 requestsexceptions==1.4.0
11:57:17 rfc3986==2.0.0
11:57:17 rpds-py==0.17.1
11:57:17 rsa==4.9
11:57:17 ruamel.yaml==0.18.5
11:57:17 ruamel.yaml.clib==0.2.8
11:57:17 s3transfer==0.10.0
11:57:17 simplejson==3.19.2
11:57:17 six==1.16.0
11:57:17 smmap==5.0.1
11:57:17 soupsieve==2.5
11:57:17 stevedore==5.1.0
11:57:17 tabulate==0.9.0
11:57:17 toml==0.10.2
11:57:17 tomlkit==0.12.3
11:57:17 tqdm==4.66.1
11:57:17 typing_extensions==4.9.0
11:57:17 tzdata==2023.4
11:57:17 urllib3==1.26.18
11:57:17 virtualenv==20.25.0
11:57:17 wcwidth==0.2.13
11:57:17 websocket-client==1.7.0
11:57:17 wrapt==1.16.0
11:57:17 xdg==6.0.0
11:57:17 xmltodict==0.13.0
11:57:17 yq==3.2.3
11:57:18 [EnvInject] - Injecting environment variables from a build step.
11:57:18 [EnvInject] - Injecting as environment variables the properties content
11:57:18 SET_JDK_VERSION=openjdk17
11:57:18 GIT_URL="git://cloud.onap.org/mirror"
11:57:18
11:57:18 [EnvInject] - Variables injected successfully.
11:57:18 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins15193114834158797893.sh
11:57:18 ---> update-java-alternatives.sh
11:57:18 ---> Updating Java version
11:57:18 ---> Ubuntu/Debian system detected
11:57:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
11:57:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
11:57:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
11:57:18 openjdk version "17.0.4" 2022-07-19
11:57:18 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
11:57:18 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
11:57:18 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
11:57:18 [EnvInject] - Injecting environment variables from a build step.
11:57:18 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
11:57:18 [EnvInject] - Variables injected successfully.
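update-java-alternatives.sh switches the build node to the JDK named by SET_JDK_VERSION. A minimal sketch of the same switch on an Ubuntu/Debian host, assuming the openjdk-17 packages are installed at the path shown in the output above:

    # Point the java/javac alternatives at OpenJDK 17, as the log output shows.
    sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
    java -version    # expect an "openjdk version 17" banner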
11:57:18 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins11297563122817450193.sh
11:57:18 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
11:57:18 + set +u
11:57:18 + save_set
11:57:18 + RUN_CSIT_SAVE_SET=ehxB
11:57:18 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
11:57:18 + '[' 1 -eq 0 ']'
11:57:18 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:57:18 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:57:18 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:57:18 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
11:57:18 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
11:57:18 + export ROBOT_VARIABLES=
11:57:18 + ROBOT_VARIABLES=
11:57:18 + export PROJECT=pap
11:57:18 + PROJECT=pap
11:57:18 + cd /w/workspace/policy-pap-master-project-csit-pap
11:57:18 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
11:57:18 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
11:57:18 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
11:57:18 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
11:57:18 + relax_set
11:57:18 + set +e
11:57:18 + set +o pipefail
11:57:18 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
11:57:18 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:57:18 +++ mktemp -d
11:57:18 ++ ROBOT_VENV=/tmp/tmp.yejbioFAjC
11:57:18 ++ echo ROBOT_VENV=/tmp/tmp.yejbioFAjC
11:57:18 +++ python3 --version
11:57:18 ++ echo 'Python version is: Python 3.6.9'
11:57:18 Python version is: Python 3.6.9
11:57:18 ++ python3 -m venv --clear /tmp/tmp.yejbioFAjC
11:57:20 ++ source /tmp/tmp.yejbioFAjC/bin/activate
11:57:20 +++ deactivate nondestructive
11:57:20 +++ '[' -n '' ']'
11:57:20 +++ '[' -n '' ']'
11:57:20 +++ '[' -n /bin/bash -o -n '' ']'
11:57:20 +++ hash -r
11:57:20 +++ '[' -n '' ']'
11:57:20 +++ unset VIRTUAL_ENV
11:57:20 +++ '[' '!' nondestructive = nondestructive ']'
11:57:20 +++ VIRTUAL_ENV=/tmp/tmp.yejbioFAjC
11:57:20 +++ export VIRTUAL_ENV
11:57:20 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:57:20 +++ PATH=/tmp/tmp.yejbioFAjC/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:57:20 +++ export PATH
11:57:20 +++ '[' -n '' ']'
11:57:20 +++ '[' -z '' ']'
11:57:20 +++ _OLD_VIRTUAL_PS1=
11:57:20 +++ '[' 'x(tmp.yejbioFAjC) ' '!=' x ']'
11:57:20 +++ PS1='(tmp.yejbioFAjC) '
11:57:20 +++ export PS1
11:57:20 +++ '[' -n /bin/bash -o -n '' ']'
11:57:20 +++ hash -r
11:57:20 ++ set -exu
11:57:20 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
11:57:23 ++ echo 'Installing Python Requirements'
11:57:23 Installing Python Requirements
11:57:23 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
11:57:45 ++ python3 -m pip -qq freeze
11:57:45 bcrypt==4.0.1
11:57:45 beautifulsoup4==4.12.3
11:57:45 bitarray==2.9.2
11:57:45 certifi==2023.11.17
11:57:45 cffi==1.15.1
11:57:45 charset-normalizer==2.0.12
11:57:45 cryptography==40.0.2
11:57:45 decorator==5.1.1
11:57:45 elasticsearch==7.17.9
11:57:45 elasticsearch-dsl==7.4.1
11:57:45 enum34==1.1.10
11:57:45 idna==3.6
11:57:45 importlib-resources==5.4.0
11:57:45 ipaddr==2.2.0
11:57:45 isodate==0.6.1
11:57:45 jmespath==0.10.0
11:57:45 jsonpatch==1.32
11:57:45 jsonpath-rw==1.4.0
11:57:45 jsonpointer==2.3
11:57:45 lxml==5.1.0
11:57:45 netaddr==0.8.0
11:57:45 netifaces==0.11.0
11:57:45 odltools==0.1.28
11:57:45 paramiko==3.4.0
11:57:45 pkg_resources==0.0.0
11:57:45 ply==3.11
11:57:45 pyang==2.6.0
11:57:45 pyangbind==0.8.1
11:57:45 pycparser==2.21
11:57:45 pyhocon==0.3.60
11:57:45 PyNaCl==1.5.0
11:57:45 pyparsing==3.1.1
11:57:45 python-dateutil==2.8.2
11:57:45 regex==2023.8.8
11:57:45 requests==2.27.1
11:57:45 robotframework==6.1.1
11:57:45 robotframework-httplibrary==0.4.2
11:57:45 robotframework-pythonlibcore==3.0.0
11:57:45 robotframework-requests==0.9.4
11:57:45 robotframework-selenium2library==3.0.0
11:57:45 robotframework-seleniumlibrary==5.1.3
11:57:45 robotframework-sshlibrary==3.8.0
11:57:45 scapy==2.5.0
11:57:45 scp==0.14.5
11:57:45 selenium==3.141.0
11:57:45 six==1.16.0
11:57:45 soupsieve==2.3.2.post1
11:57:45 urllib3==1.26.18
11:57:45 waitress==2.0.0
11:57:45 WebOb==1.8.7
11:57:45 WebTest==3.0.0
11:57:45 zipp==3.6.0
11:57:45 ++ mkdir -p /tmp/tmp.yejbioFAjC/src/onap
11:57:45 ++ rm -rf /tmp/tmp.yejbioFAjC/src/onap/testsuite
11:57:45 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
11:57:52 ++ echo 'Installing python confluent-kafka library'
11:57:52 Installing python confluent-kafka library
11:57:52 ++ python3 -m pip install -qq confluent-kafka
11:57:53 ++ echo 'Uninstall docker-py and reinstall docker.'
11:57:53 Uninstall docker-py and reinstall docker.
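prepare-robot-env.sh builds a throwaway virtualenv for the Robot suites before any containers start. A condensed sketch of the same steps (the venv path is whatever mktemp returns; pylibs.txt pins the Robot Framework libraries listed in the freeze above):

    # Sketch: recreate the CSIT Robot environment.
    ROBOT_VENV=$(mktemp -d)
    python3 -m venv --clear "$ROBOT_VENV"
    source "$ROBOT_VENV/bin/activate"
    python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
    python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
    python3 -m pip install -qq confluent-kafka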
11:57:53 ++ python3 -m pip uninstall -y -qq docker
11:57:53 ++ python3 -m pip install -U -qq docker
11:57:54 ++ python3 -m pip -qq freeze
11:57:55 bcrypt==4.0.1
11:57:55 beautifulsoup4==4.12.3
11:57:55 bitarray==2.9.2
11:57:55 certifi==2023.11.17
11:57:55 cffi==1.15.1
11:57:55 charset-normalizer==2.0.12
11:57:55 confluent-kafka==2.3.0
11:57:55 cryptography==40.0.2
11:57:55 decorator==5.1.1
11:57:55 deepdiff==5.7.0
11:57:55 dnspython==2.2.1
11:57:55 docker==5.0.3
11:57:55 elasticsearch==7.17.9
11:57:55 elasticsearch-dsl==7.4.1
11:57:55 enum34==1.1.10
11:57:55 future==0.18.3
11:57:55 idna==3.6
11:57:55 importlib-resources==5.4.0
11:57:55 ipaddr==2.2.0
11:57:55 isodate==0.6.1
11:57:55 Jinja2==3.0.3
11:57:55 jmespath==0.10.0
11:57:55 jsonpatch==1.32
11:57:55 jsonpath-rw==1.4.0
11:57:55 jsonpointer==2.3
11:57:55 kafka-python==2.0.2
11:57:55 lxml==5.1.0
11:57:55 MarkupSafe==2.0.1
11:57:55 more-itertools==5.0.0
11:57:55 netaddr==0.8.0
11:57:55 netifaces==0.11.0
11:57:55 odltools==0.1.28
11:57:55 ordered-set==4.0.2
11:57:55 paramiko==3.4.0
11:57:55 pbr==6.0.0
11:57:55 pkg_resources==0.0.0
11:57:55 ply==3.11
11:57:55 protobuf==3.19.6
11:57:55 pyang==2.6.0
11:57:55 pyangbind==0.8.1
11:57:55 pycparser==2.21
11:57:55 pyhocon==0.3.60
11:57:55 PyNaCl==1.5.0
11:57:55 pyparsing==3.1.1
11:57:55 python-dateutil==2.8.2
11:57:55 PyYAML==6.0.1
11:57:55 regex==2023.8.8
11:57:55 requests==2.27.1
11:57:55 robotframework==6.1.1
11:57:55 robotframework-httplibrary==0.4.2
11:57:55 robotframework-onap==0.6.0.dev105
11:57:55 robotframework-pythonlibcore==3.0.0
11:57:55 robotframework-requests==0.9.4
11:57:55 robotframework-selenium2library==3.0.0
11:57:55 robotframework-seleniumlibrary==5.1.3
11:57:55 robotframework-sshlibrary==3.8.0
11:57:55 robotlibcore-temp==1.0.2
11:57:55 scapy==2.5.0
11:57:55 scp==0.14.5
11:57:55 selenium==3.141.0
11:57:55 six==1.16.0
11:57:55 soupsieve==2.3.2.post1
11:57:55 urllib3==1.26.18
11:57:55 waitress==2.0.0
11:57:55 WebOb==1.8.7
11:57:55 websocket-client==1.3.1
11:57:55 WebTest==3.0.0
11:57:55 zipp==3.6.0
11:57:55 ++ uname
11:57:55 ++ grep -q Linux
11:57:55 ++ sudo apt-get -y -qq install libxml2-utils
11:57:55 + load_set
11:57:55 + _setopts=ehuxB
11:57:55 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
11:57:55 ++ tr : ' '
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o braceexpand
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o hashall
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o interactive-comments
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o nounset
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o xtrace
11:57:55 ++ echo ehuxB
11:57:55 ++ sed 's/./& /g'
11:57:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:57:55 + set +e
11:57:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:57:55 + set +h
11:57:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:57:55 + set +u
11:57:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:57:55 + set +x
11:57:55 + source_safely /tmp/tmp.yejbioFAjC/bin/activate
11:57:55 + '[' -z /tmp/tmp.yejbioFAjC/bin/activate ']'
11:57:55 + relax_set
11:57:55 + set +e
11:57:55 + set +o pipefail
11:57:55 + . /tmp/tmp.yejbioFAjC/bin/activate
11:57:55 ++ deactivate nondestructive
11:57:55 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
11:57:55 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:57:55 ++ export PATH
11:57:55 ++ unset _OLD_VIRTUAL_PATH
11:57:55 ++ '[' -n '' ']'
11:57:55 ++ '[' -n /bin/bash -o -n '' ']'
11:57:55 ++ hash -r
11:57:55 ++ '[' -n '' ']'
11:57:55 ++ unset VIRTUAL_ENV
11:57:55 ++ '[' '!' nondestructive = nondestructive ']'
11:57:55 ++ VIRTUAL_ENV=/tmp/tmp.yejbioFAjC
11:57:55 ++ export VIRTUAL_ENV
11:57:55 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:57:55 ++ PATH=/tmp/tmp.yejbioFAjC/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:57:55 ++ export PATH
11:57:55 ++ '[' -n '' ']'
11:57:55 ++ '[' -z '' ']'
11:57:55 ++ _OLD_VIRTUAL_PS1='(tmp.yejbioFAjC) '
11:57:55 ++ '[' 'x(tmp.yejbioFAjC) ' '!=' x ']'
11:57:55 ++ PS1='(tmp.yejbioFAjC) (tmp.yejbioFAjC) '
11:57:55 ++ export PS1
11:57:55 ++ '[' -n /bin/bash -o -n '' ']'
11:57:55 ++ hash -r
11:57:55 + load_set
11:57:55 + _setopts=hxB
11:57:55 ++ echo braceexpand:hashall:interactive-comments:xtrace
11:57:55 ++ tr : ' '
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o braceexpand
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o hashall
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o interactive-comments
11:57:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:57:55 + set +o xtrace
11:57:55 ++ echo hxB
11:57:55 ++ sed 's/./& /g'
11:57:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:57:55 + set +h
11:57:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:57:55 + set +x
11:57:55 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
11:57:55 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
11:57:55 + export TEST_OPTIONS=
11:57:55 + TEST_OPTIONS=
11:57:55 ++ mktemp -d
11:57:55 + WORKDIR=/tmp/tmp.T4ASB2z6Jw
11:57:55 + cd /tmp/tmp.T4ASB2z6Jw
11:57:55 + docker login -u docker -p docker nexus3.onap.org:10001
11:57:55 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
11:57:55 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
11:57:55 Configure a credential helper to remove this warning. See
11:57:55 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
11:57:55
11:57:55 Login Succeeded
11:57:55 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:57:55 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
11:57:55 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
11:57:55 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:57:55 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:57:55 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
11:57:55 + relax_set
11:57:55 + set +e
11:57:55 + set +o pipefail
11:57:55 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:57:55 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
11:57:55 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:57:55 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
11:57:55 +++ GERRIT_BRANCH=master
11:57:55 +++ echo GERRIT_BRANCH=master
11:57:55 GERRIT_BRANCH=master
11:57:55 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
11:57:55 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
11:57:55 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
11:57:55 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
11:57:56 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
11:57:56 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
11:57:56 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
11:57:56 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
11:57:56 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
11:57:56 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
11:57:56 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
11:57:56 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:57:56 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
11:57:56 +++ grafana=false
11:57:56 +++ gui=false
11:57:56 +++ [[ 2 -gt 0 ]]
11:57:56 +++ key=apex-pdp
11:57:56 +++ case $key in
11:57:56 +++ echo apex-pdp
11:57:56 apex-pdp
11:57:56 +++ component=apex-pdp
11:57:56 +++ shift
11:57:56 +++ [[ 1 -gt 0 ]]
11:57:56 +++ key=--grafana
11:57:56 +++ case $key in
11:57:56 +++ grafana=true
11:57:56 +++ shift
11:57:56 +++ [[ 0 -gt 0 ]]
11:57:56 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
11:57:56 +++ echo 'Configuring docker compose...'
11:57:56 Configuring docker compose...
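The docker login earlier in this step passes the registry password with -p, which is what triggers Docker's warning. Following the advice in the warning itself, the same login with the password read from stdin (the credentials are the throwaway CI ones shown in the log):

    # Recommended form per the warning: read the password from stdin.
    echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001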
11:57:56 +++ source export-ports.sh
11:57:56 +++ source get-versions.sh
11:57:58 +++ '[' -z pap ']'
11:57:58 +++ '[' -n apex-pdp ']'
11:57:58 +++ '[' apex-pdp == logs ']'
11:57:58 +++ '[' true = true ']'
11:57:58 +++ echo 'Starting apex-pdp application with Grafana'
11:57:58 Starting apex-pdp application with Grafana
11:57:58 +++ docker-compose up -d apex-pdp grafana
11:57:59 Creating network "compose_default" with the default driver
11:57:59 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
11:57:59 latest: Pulling from prom/prometheus
11:58:03 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
11:58:03 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
11:58:03 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
11:58:03 latest: Pulling from grafana/grafana
11:58:08 Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2
11:58:08 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
11:58:08 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
11:58:08 10.10.2: Pulling from mariadb
11:58:13 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
11:58:13 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
11:58:13 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
11:58:13 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
11:58:17 Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
11:58:17 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
11:58:17 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
11:58:17 latest: Pulling from confluentinc/cp-zookeeper
11:58:29 Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
11:58:29 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
11:58:29 Pulling kafka (confluentinc/cp-kafka:latest)...
11:58:30 latest: Pulling from confluentinc/cp-kafka
11:58:32 Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
11:58:32 Status: Downloaded newer image for confluentinc/cp-kafka:latest
11:58:32 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
11:58:34 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
11:58:49 Digest: sha256:611206351f1d7f71f498112d482be2423c80b29c75cff0383910ee3a4330e7b5
11:58:51 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
11:58:52 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
11:59:00 3.1.1-SNAPSHOT: Pulling from onap/policy-api
11:59:13 Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
11:59:13 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
11:59:13 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)...
11:59:13 3.1.1-SNAPSHOT: Pulling from onap/policy-pap
11:59:16 Digest: sha256:8a0432281bb5edb6d25e3d0e62d78b6aebc2875f52ecd11259251b497208c04e
11:59:16 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT
11:59:16 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
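Each pull above is logged with the image's immutable content digest. A quick, illustrative way to confirm that a locally cached image matches a digest from this log (not part of the job itself):

    # Print the repo digest of the pulled PAP image and compare it to the log.
    docker image inspect --format '{{index .RepoDigests 0}}' \
        nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT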
11:59:16 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
11:59:23 Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
11:59:23 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
11:59:23 Creating simulator ...
11:59:23 Creating compose_zookeeper_1 ...
11:59:23 Creating mariadb ...
11:59:23 Creating prometheus ...
11:59:34 Creating mariadb ... done
11:59:34 Creating policy-db-migrator ...
11:59:34 Creating policy-db-migrator ... done
11:59:34 Creating policy-api ...
11:59:35 Creating prometheus ... done
11:59:35 Creating grafana ...
11:59:35 Creating policy-api ... done
11:59:36 Creating simulator ... done
11:59:37 Creating compose_zookeeper_1 ... done
11:59:37 Creating kafka ...
11:59:38 Creating kafka ... done
11:59:38 Creating policy-pap ...
11:59:39 Creating grafana ... done
11:59:40 Creating policy-pap ... done
11:59:40 Creating policy-apex-pdp ...
11:59:41 Creating policy-apex-pdp ... done
11:59:41 +++ echo 'Prometheus server: http://localhost:30259'
11:59:41 Prometheus server: http://localhost:30259
11:59:41 +++ echo 'Grafana server: http://localhost:30269'
11:59:41 Grafana server: http://localhost:30269
11:59:41 +++ cd /w/workspace/policy-pap-master-project-csit-pap
11:59:41 ++ sleep 10
11:59:51 ++ unset http_proxy https_proxy
11:59:51 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
11:59:51 Waiting for REST to come up on localhost port 30003...
11:59:51 NAMES STATUS
11:59:51 policy-apex-pdp Up 10 seconds
11:59:51 policy-pap Up 11 seconds
11:59:51 kafka Up 13 seconds
11:59:51 grafana Up 12 seconds
11:59:51 policy-api Up 15 seconds
11:59:51 prometheus Up 16 seconds
11:59:51 compose_zookeeper_1 Up 14 seconds
11:59:51 mariadb Up 17 seconds
11:59:51 simulator Up 15 seconds
11:59:56 NAMES STATUS
11:59:56 policy-apex-pdp Up 15 seconds
11:59:56 policy-pap Up 16 seconds
11:59:56 kafka Up 18 seconds
11:59:56 grafana Up 17 seconds
11:59:56 policy-api Up 20 seconds
11:59:56 prometheus Up 21 seconds
11:59:56 compose_zookeeper_1 Up 19 seconds
11:59:56 mariadb Up 22 seconds
11:59:56 simulator Up 20 seconds
12:00:01 NAMES STATUS
12:00:01 policy-apex-pdp Up 20 seconds
12:00:01 policy-pap Up 21 seconds
12:00:01 kafka Up 23 seconds
12:00:01 grafana Up 22 seconds
12:00:01 policy-api Up 25 seconds
12:00:01 prometheus Up 26 seconds
12:00:01 compose_zookeeper_1 Up 24 seconds
12:00:01 mariadb Up 27 seconds
12:00:01 simulator Up 25 seconds
12:00:06 NAMES STATUS
12:00:06 policy-apex-pdp Up 25 seconds
12:00:06 policy-pap Up 26 seconds
12:00:06 kafka Up 28 seconds
12:00:06 grafana Up 27 seconds
12:00:06 policy-api Up 30 seconds
12:00:06 prometheus Up 31 seconds
12:00:06 compose_zookeeper_1 Up 29 seconds
12:00:06 mariadb Up 32 seconds
12:00:06 simulator Up 30 seconds
12:00:11 NAMES STATUS
12:00:11 policy-apex-pdp Up 30 seconds
12:00:11 policy-pap Up 31 seconds
12:00:11 kafka Up 33 seconds
12:00:11 grafana Up 32 seconds
12:00:11 policy-api Up 35 seconds
12:00:11 prometheus Up 36 seconds
12:00:11 compose_zookeeper_1 Up 34 seconds
12:00:11 mariadb Up 37 seconds
12:00:11 simulator Up 35 seconds
12:00:16 NAMES STATUS
12:00:16 policy-apex-pdp Up 35 seconds
12:00:16 policy-pap Up 36 seconds
12:00:16 kafka Up 38 seconds
12:00:16 grafana Up 37 seconds
12:00:16 policy-api Up 40 seconds
12:00:16 prometheus Up 41 seconds
12:00:16 compose_zookeeper_1 Up 39 seconds
12:00:16 mariadb Up 42 seconds
12:00:16 simulator Up 40 seconds
12:00:16 ++ export 'SUITES=pap-test.robot
12:00:16 pap-slas.robot'
12:00:16 ++ SUITES='pap-test.robot
12:00:16 pap-slas.robot'
12:00:16 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
12:00:16 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
12:00:16 + load_set
12:00:16 + _setopts=hxB
12:00:16 ++ echo braceexpand:hashall:interactive-comments:xtrace
12:00:16 ++ tr : ' '
12:00:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:00:16 + set +o braceexpand
12:00:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:00:16 + set +o hashall
12:00:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:00:16 + set +o interactive-comments
12:00:16 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:00:16 + set +o xtrace
12:00:16 ++ echo hxB
12:00:16 ++ sed 's/./& /g'
12:00:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:00:16 + set +h
12:00:16 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:00:16 + set +x
12:00:16 + docker_stats
12:00:16 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
12:00:16 ++ uname -s
12:00:16 + '[' Linux == Darwin ']'
12:00:16 + sh -c 'top -bn1 | head -3'
12:00:17 top - 12:00:17 up 5 min, 0 users, load average: 3.17, 1.48, 0.61
12:00:17 Tasks: 209 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
12:00:17 %Cpu(s): 12.1 us, 2.5 sy, 0.0 ni, 79.2 id, 6.0 wa, 0.0 hi, 0.1 si, 0.1 st
12:00:17 + echo
12:00:17
12:00:17 + sh -c 'free -h'
12:00:17 total used free shared buff/cache available
12:00:17 Mem: 31G 2.7G 22G 1.3M 6.7G 28G
12:00:17 Swap: 1.0G 0B 1.0G
12:00:17 + echo
12:00:17
12:00:17 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
12:00:17 NAMES STATUS
12:00:17 policy-apex-pdp Up 35 seconds
12:00:17 policy-pap Up 36 seconds
12:00:17 kafka Up 38 seconds
12:00:17 grafana Up 37 seconds
12:00:17 policy-api Up 41 seconds
12:00:17 prometheus Up 41 seconds
12:00:17 compose_zookeeper_1 Up 40 seconds
12:00:17 mariadb Up 43 seconds
12:00:17 simulator Up 40 seconds
12:00:17 + echo
12:00:17 + docker stats --no-stream
12:00:17
12:00:19 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
12:00:19 45362a277fa0 policy-apex-pdp 1.93% 173.4MiB / 31.41GiB 0.54% 9.4kB / 8.72kB 0B / 0B 48
12:00:19 4d217cf2a7c4 policy-pap 13.16% 496.9MiB / 31.41GiB 1.54% 29.9kB / 32.5kB 0B / 180MB 62
12:00:19 451f6c43c6af kafka 9.94% 377.5MiB / 31.41GiB 1.17% 74.4kB / 77.1kB 0B / 508kB 83
12:00:19 7493f10c2d01 grafana 0.03% 54.32MiB / 31.41GiB 0.17% 19.1kB / 3.36kB 0B / 23.9MB 16
12:00:19 aff50ba937b3 policy-api 0.13% 515.2MiB / 31.41GiB 1.60% 1e+03kB / 710kB 0B / 0B 53
12:00:19 88819eafa12d prometheus 0.00% 17.96MiB / 31.41GiB 0.06% 1.55kB / 316B 0B / 0B 11
12:00:19 c66458784174 compose_zookeeper_1 0.12% 98.55MiB / 31.41GiB 0.31% 56.8kB / 50.5kB 127kB / 406kB 60
12:00:19 84ec1f810d50 mariadb 0.02% 101.8MiB / 31.41GiB 0.32% 996kB / 1.19MB 11MB / 68.2MB 37
12:00:19 20d1574c8bca simulator 0.08% 123.9MiB / 31.41GiB 0.39% 1.23kB / 0B 0B / 0B 76
12:00:19 + echo
12:00:19
12:00:19 + cd /tmp/tmp.T4ASB2z6Jw
12:00:19 + echo 'Reading the testplan:'
12:00:19 Reading the testplan:
12:00:19 + echo 'pap-test.robot
12:00:19 pap-slas.robot'
12:00:19 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
12:00:19 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
12:00:19 + cat testplan.txt
12:00:19 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
12:00:19 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
12:00:19 ++ xargs
12:00:19 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
12:00:19 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
12:00:19 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
12:00:19 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
12:00:19 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
12:00:19 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
12:00:19 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
12:00:19 + relax_set
12:00:19 + set +e
12:00:19 + set +o pipefail
12:00:19 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
12:00:20 ==============================================================================
12:00:20 pap
12:00:20 ==============================================================================
12:00:20 pap.Pap-Test
12:00:20 ==============================================================================
12:00:21 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
12:00:21 ------------------------------------------------------------------------------
12:00:21 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
12:00:21 ------------------------------------------------------------------------------
12:00:22 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
12:00:22 ------------------------------------------------------------------------------
12:00:22 Healthcheck :: Verify policy pap health check | PASS |
12:00:22 ------------------------------------------------------------------------------
12:00:42 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
12:00:42 ------------------------------------------------------------------------------
12:00:43 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
12:00:43 ------------------------------------------------------------------------------
12:00:43 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
12:00:43 ------------------------------------------------------------------------------
12:00:43 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
12:00:43 ------------------------------------------------------------------------------
12:00:44 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
12:00:44 ------------------------------------------------------------------------------
12:00:44 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
12:00:44 ------------------------------------------------------------------------------
12:00:44 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
12:00:44 ------------------------------------------------------------------------------
12:00:44 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
12:00:44 ------------------------------------------------------------------------------
12:00:45 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
12:00:45 ------------------------------------------------------------------------------
12:00:45 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
12:00:45 ------------------------------------------------------------------------------
12:00:45 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
12:00:45 ------------------------------------------------------------------------------
12:00:45 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
12:00:45 ------------------------------------------------------------------------------
12:00:46 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
12:00:46 ------------------------------------------------------------------------------
12:01:06 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
12:01:06 ------------------------------------------------------------------------------
12:01:06 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
12:01:06 ------------------------------------------------------------------------------
12:01:06 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
12:01:06 ------------------------------------------------------------------------------
12:01:06 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
12:01:06 ------------------------------------------------------------------------------
12:01:06 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
12:01:06 ------------------------------------------------------------------------------
12:01:06 pap.Pap-Test | PASS |
12:01:06 22 tests, 22 passed, 0 failed
12:01:06 ==============================================================================
12:01:06 pap.Pap-Slas
12:01:06 ==============================================================================
12:02:06 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
12:02:06 ------------------------------------------------------------------------------
12:02:06 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
12:02:06 ------------------------------------------------------------------------------
12:02:06 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
12:02:06 ------------------------------------------------------------------------------
12:02:06 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
12:02:06 ------------------------------------------------------------------------------
12:02:07 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
12:02:07 ------------------------------------------------------------------------------
12:02:07 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
12:02:07 ------------------------------------------------------------------------------
12:02:07 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
12:02:07 ------------------------------------------------------------------------------
12:02:07 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
12:02:07 ------------------------------------------------------------------------------
12:02:07 pap.Pap-Slas | PASS |
12:02:07 8 tests, 8 passed, 0 failed
12:02:07 ==============================================================================
12:02:07 pap | PASS |
12:02:07 30 tests, 30 passed, 0 failed
12:02:07 ==============================================================================
12:02:07 Output: /tmp/tmp.T4ASB2z6Jw/output.xml
12:02:07 Log: /tmp/tmp.T4ASB2z6Jw/log.html
12:02:07 Report: /tmp/tmp.T4ASB2z6Jw/report.html
12:02:07 + RESULT=0
12:02:07 + load_set
12:02:07 + _setopts=hxB
12:02:07 ++ echo braceexpand:hashall:interactive-comments:xtrace
12:02:07 ++ tr : ' '
12:02:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:02:07 + set +o braceexpand
12:02:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:02:07 + set +o hashall
12:02:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:02:07 + set +o interactive-comments
12:02:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:02:07 + set +o xtrace
12:02:07 ++ echo hxB
12:02:07 ++ sed 's/./& /g'
12:02:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:02:07 + set +h
12:02:07 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:02:07 + set +x
12:02:07 + echo 'RESULT: 0'
12:02:07 RESULT: 0
12:02:07 + exit 0
12:02:07 + on_exit
12:02:07 + rc=0
12:02:07 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
12:02:07 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
12:02:07 NAMES STATUS
12:02:07 policy-apex-pdp Up 2 minutes
12:02:07 policy-pap Up 2 minutes
12:02:07 kafka Up 2 minutes
12:02:07 grafana Up 2 minutes
12:02:07 policy-api Up 2 minutes
12:02:07 prometheus Up 2 minutes
12:02:07 compose_zookeeper_1 Up 2 minutes
12:02:07 mariadb Up 2 minutes
12:02:07 simulator Up 2 minutes
12:02:07 + docker_stats
12:02:07 ++ uname -s
12:02:07 + '[' Linux == Darwin ']'
12:02:07 + sh -c 'top -bn1 | head -3'
12:02:07 top - 12:02:07 up 7 min, 0 users, load average: 0.68, 1.14, 0.59
12:02:07 Tasks: 198 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
12:02:07 %Cpu(s): 10.1 us, 2.0 sy, 0.0 ni, 83.2 id, 4.7 wa, 0.0 hi, 0.1 si, 0.1 st
12:02:07 + echo
12:02:07
12:02:07 + sh -c 'free -h'
12:02:07 total used free shared buff/cache available
12:02:07 Mem: 31G 2.7G 22G 1.3M 6.7G 28G
12:02:07 Swap: 1.0G 0B 1.0G
12:02:07 + echo
12:02:07
12:02:07 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
12:02:07 NAMES STATUS
12:02:07 policy-apex-pdp Up 2 minutes
12:02:07 policy-pap Up 2 minutes
12:02:07 kafka Up 2 minutes
12:02:07 grafana Up 2 minutes
12:02:07 policy-api Up 2 minutes
12:02:07 prometheus Up 2 minutes
12:02:07 compose_zookeeper_1 Up 2 minutes
12:02:07 mariadb Up 2 minutes
12:02:07 simulator Up 2 minutes
12:02:07 + echo
12:02:07
12:02:07 + docker stats --no-stream
12:02:10 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
12:02:10 45362a277fa0 policy-apex-pdp 0.42% 185.1MiB / 31.41GiB 0.58% 57.2kB / 91.8kB 0B / 0B 50
12:02:10 4d217cf2a7c4 policy-pap 0.56% 501.8MiB / 31.41GiB 1.56% 2.33MB / 811kB 0B / 180MB 65
12:02:10 451f6c43c6af kafka 1.27% 384.4MiB / 31.41GiB 1.19% 243kB / 218kB 0B / 606kB 83
12:02:10 7493f10c2d01 grafana 0.03% 56.5MiB / 31.41GiB 0.18% 20.1kB / 4.49kB 0B / 23.9MB 16
12:02:10 aff50ba937b3 policy-api 0.10% 516.8MiB / 31.41GiB 1.61% 2.49MB / 1.26MB 0B / 0B 54
12:02:10 88819eafa12d prometheus 0.00% 24.14MiB / 31.41GiB 0.08% 184kB / 10.9kB 0B / 0B 11
12:02:10 c66458784174 compose_zookeeper_1 0.19% 99MiB / 31.41GiB 0.31% 59.8kB / 52.1kB 127kB / 406kB 60
12:02:10 84ec1f810d50 mariadb 0.01% 103.1MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 68.5MB 28
12:02:10 20d1574c8bca simulator 0.06% 123.8MiB / 31.41GiB 0.38% 1.5kB / 0B 0B / 0B 76
12:02:10 + echo
12:02:10
12:02:10 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
12:02:10 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
12:02:10 + relax_set
12:02:10 + set +e
12:02:10 + set +o pipefail
12:02:10 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
12:02:10 ++ echo 'Shut down started!'
12:02:10 Shut down started!
12:02:10 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:02:10 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
12:02:10 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
12:02:10 ++ source export-ports.sh
12:02:10 ++ source get-versions.sh
12:02:12 ++ echo 'Collecting logs from docker compose containers...'
12:02:12 Collecting logs from docker compose containers...
12:02:12 ++ docker-compose logs
12:02:14 ++ cat docker_compose.log
12:02:14 Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, prometheus, compose_zookeeper_1, mariadb, simulator
12:02:14 mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
12:02:14 mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
12:02:14 mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
12:02:14 mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Initializing database files
12:02:14 mariadb | 2024-01-23 11:59:34 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
12:02:14 mariadb | 2024-01-23 11:59:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
12:02:14 mariadb | 2024-01-23 11:59:34 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
12:02:14 mariadb |
12:02:14 mariadb |
12:02:14 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
12:02:14 mariadb | To do so, start the server, then issue the following command:
12:02:14 mariadb |
12:02:14 mariadb | '/usr/bin/mysql_secure_installation'
12:02:14 mariadb |
12:02:14 mariadb | which will also give you the option of removing the test
12:02:14 mariadb | databases and anonymous user created by default. This is
12:02:14 mariadb | strongly recommended for production servers.
12:02:14 mariadb |
12:02:14 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
12:02:14 mariadb |
12:02:14 mariadb | Please report any problems at https://mariadb.org/jira
12:02:14 mariadb |
12:02:14 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
12:02:14 mariadb |
12:02:14 mariadb | Consider joining MariaDB's strong and vibrant community:
12:02:14 mariadb | https://mariadb.org/get-involved/
12:02:14 mariadb |
12:02:14 mariadb | 2024-01-23 11:59:36+00:00 [Note] [Entrypoint]: Database files initialized
12:02:14 mariadb | 2024-01-23 11:59:36+00:00 [Note] [Entrypoint]: Starting temporary server
12:02:14 mariadb | 2024-01-23 11:59:36+00:00 [Note] [Entrypoint]: Waiting for server startup
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 93 ...
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Number of transaction pools: 1
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Completed initialization of buffer pool
12:02:14 policy-api | Waiting for mariadb port 3306...
12:02:14 policy-api | mariadb (172.17.0.3:3306) open
12:02:14 policy-api | Waiting for policy-db-migrator port 6824...
12:02:14 policy-api | policy-db-migrator (172.17.0.6:6824) open
12:02:14 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
12:02:14 policy-api |
12:02:14 policy-api | . ____ _ __ _ _
12:02:14 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
12:02:14 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
12:02:14 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
12:02:14 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
12:02:14 policy-api | =========|_|==============|___/=/_/_/_/
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: 128 rollback segments are active.
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: log sequence number 46590; transaction id 14
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] Plugin 'FEEDBACK' is disabled.
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
12:02:14 mariadb | 2024-01-23 11:59:36 0 [Note] mariadbd: ready for connections.
12:02:14 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
12:02:14 mariadb | 2024-01-23 11:59:37+00:00 [Note] [Entrypoint]: Temporary server started.
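The "Waiting for <service> port <port>..." lines above come from the containers' startup probes, which block until a TCP dependency accepts connections. A minimal bash equivalent of such a probe (host and port are illustrative):

    # Block until a TCP dependency is reachable, as the entrypoint probes do.
    host=mariadb port=3306
    until bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; do
      echo "Waiting for ${host} port ${port}..."
      sleep 2
    done
    echo "${host} (${port}) open"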
12:02:14 mariadb | 2024-01-23 11:59:38+00:00 [Note] [Entrypoint]: Creating user policy_user
12:02:14 mariadb | 2024-01-23 11:59:38+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
12:02:14 mariadb |
12:02:14 mariadb | 2024-01-23 11:59:39+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
12:02:14 mariadb |
12:02:14 mariadb | 2024-01-23 11:59:39+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
12:02:14 mariadb | #!/bin/bash -xv
12:02:14 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
12:02:14 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
12:02:14 mariadb | #
12:02:14 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
12:02:14 mariadb | # you may not use this file except in compliance with the License.
12:02:14 mariadb | # You may obtain a copy of the License at
12:02:14 mariadb | #
12:02:14 mariadb | # http://www.apache.org/licenses/LICENSE-2.0
12:02:14 mariadb | #
12:02:14 mariadb | # Unless required by applicable law or agreed to in writing, software
12:02:14 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
12:02:14 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12:02:14 mariadb | # See the License for the specific language governing permissions and
12:02:14 mariadb | # limitations under the License.
12:02:14 mariadb |
12:02:14 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:02:14 mariadb | do
12:02:14 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
12:02:14 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
12:02:14 mariadb | done
12:02:14 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:02:14 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
12:02:14 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
12:02:14 zookeeper_1 | ===> User
12:02:14 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
12:02:14 zookeeper_1 | ===> Configuring ...
12:02:14 zookeeper_1 | ===> Running preflight checks ...
12:02:14 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
12:02:14 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
12:02:14 zookeeper_1 | ===> Launching ...
12:02:14 zookeeper_1 | ===> Launching zookeeper ...
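The db.sh loop shown above creates six schemas and grants policy_user access to each. A quick check that they exist, assuming the compose stack from this log is still running (the root password "secret" is the one visible in the xtrace output):

    # List the schemas created by db.sh inside the mariadb container.
    docker exec mariadb mysql -uroot -psecret --execute 'SHOW DATABASES;'
    # Expect: migration, pooling, policyadmin, operationshistory, clampacm, policyclamp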
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,867] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,880] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,880] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,880] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,880] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,882] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,882] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,882] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,882] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,884] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,884] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,884] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,884] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,884] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,884] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,885] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,896] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,898] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,899] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,901] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,911] INFO (org.apache.zookeeper.server.ZooKeeperServer)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,911] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,911] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,911] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,911] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,911] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,911] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 12:02:14 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 12:02:14 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 12:02:14 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 12:02:14 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 12:02:14 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 12:02:14 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 12:02:14 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 12:02:14 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 12:02:14 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 12:02:14 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 12:02:14 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 12:02:14 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 12:02:14 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 12:02:14 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 12:02:14 mariadb | 12:02:14 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 12:02:14 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 12:02:14 policy-apex-pdp | Waiting for mariadb port 3306... 12:02:14 policy-apex-pdp | mariadb (172.17.0.3:3306) open 12:02:14 policy-apex-pdp | Waiting for kafka port 9092... 12:02:14 policy-apex-pdp | Waiting for pap port 6969... 
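The traced loop above has now created all six databases and granted policy_user full privileges on each; a minimal verification sketch, run from any container on the same Docker network, using the root password the -x trace exposes ('secret'):

    # list the databases the init script created
    mysql -h mariadb -uroot -psecret --execute 'SHOW DATABASES;'
    # confirm the grants issued to policy_user
    mysql -h mariadb -uroot -psecret --execute "SHOW GRANTS FOR 'policy_user'@'%';"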
12:02:14 policy-apex-pdp | kafka (172.17.0.9:9092) open 12:02:14 policy-apex-pdp | pap (172.17.0.10:6969) open 12:02:14 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.252+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.427+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 12:02:14 policy-apex-pdp | allow.auto.create.topics = true 12:02:14 policy-apex-pdp | auto.commit.interval.ms = 5000 12:02:14 policy-apex-pdp | auto.include.jmx.reporter = true 12:02:14 policy-apex-pdp | auto.offset.reset = latest 12:02:14 policy-apex-pdp | bootstrap.servers = [kafka:9092] 12:02:14 policy-apex-pdp | check.crcs = true 12:02:14 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 12:02:14 policy-apex-pdp | client.id = consumer-5e219e28-7118-417e-b91d-edf2321c7473-1 12:02:14 policy-apex-pdp | client.rack = 12:02:14 policy-apex-pdp | connections.max.idle.ms = 540000 12:02:14 policy-apex-pdp | default.api.timeout.ms = 60000 12:02:14 policy-apex-pdp | enable.auto.commit = true 12:02:14 policy-apex-pdp | exclude.internal.topics = true 12:02:14 policy-apex-pdp | fetch.max.bytes = 52428800 12:02:14 policy-apex-pdp | fetch.max.wait.ms = 500 12:02:14 policy-apex-pdp | fetch.min.bytes = 1 12:02:14 policy-apex-pdp | group.id = 5e219e28-7118-417e-b91d-edf2321c7473 12:02:14 policy-apex-pdp | group.instance.id = null 12:02:14 policy-apex-pdp | heartbeat.interval.ms = 3000 12:02:14 policy-apex-pdp | interceptor.classes = [] 12:02:14 policy-apex-pdp | internal.leave.group.on.close = true 12:02:14 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 12:02:14 policy-apex-pdp | isolation.level = read_uncommitted 12:02:14 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 policy-apex-pdp | max.partition.fetch.bytes = 1048576 12:02:14 policy-apex-pdp | max.poll.interval.ms = 300000 12:02:14 policy-apex-pdp | max.poll.records = 500 12:02:14 policy-apex-pdp | metadata.max.age.ms = 300000 12:02:14 policy-apex-pdp | metric.reporters = [] 12:02:14 policy-apex-pdp | metrics.num.samples = 2 12:02:14 policy-apex-pdp | metrics.recording.level = INFO 12:02:14 policy-apex-pdp | metrics.sample.window.ms = 30000 12:02:14 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 12:02:14 policy-apex-pdp | receive.buffer.bytes = 65536 12:02:14 policy-apex-pdp | reconnect.backoff.max.ms = 1000 12:02:14 policy-apex-pdp | reconnect.backoff.ms = 50 12:02:14 policy-apex-pdp | request.timeout.ms = 30000 12:02:14 
policy-apex-pdp | retry.backoff.ms = 100 12:02:14 policy-apex-pdp | sasl.client.callback.handler.class = null 12:02:14 policy-apex-pdp | sasl.jaas.config = null 12:02:14 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 policy-apex-pdp | sasl.kerberos.service.name = null 12:02:14 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 12:02:14 policy-apex-pdp | sasl.login.callback.handler.class = null 12:02:14 policy-apex-pdp | sasl.login.class = null 12:02:14 policy-apex-pdp | sasl.login.connect.timeout.ms = null 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331008745Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331371303Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331429446Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331454648Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.33150014Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331536452Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331598725Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331624416Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331667438Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331723001Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331749783Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331836027Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331861668Z level=info msg=Target target=[all] 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331953403Z level=info msg="Path Home" path=/usr/share/grafana 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.331989325Z level=info msg="Path Data" path=/var/lib/grafana 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.332062728Z level=info msg="Path Logs" path=/var/log/grafana 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.33208993Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.332161604Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 12:02:14 grafana | logger=settings t=2024-01-23T11:59:39.332236827Z level=info msg="App mode production" 12:02:14 grafana | logger=sqlstore 
t=2024-01-23T11:59:39.332656809Z level=info msg="Connecting to DB" dbtype=sqlite3 12:02:14 grafana | logger=sqlstore t=2024-01-23T11:59:39.332723912Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.333613857Z level=info msg="Starting DB migrations" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.334824869Z level=info msg="Executing migration" id="create migration_log table" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.335677232Z level=info msg="Migration successfully executed" id="create migration_log table" duration=851.883µs 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.393940118Z level=info msg="Executing migration" id="create user table" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.395270846Z level=info msg="Migration successfully executed" id="create user table" duration=1.330058ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.400374736Z level=info msg="Executing migration" id="add unique index user.login" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.401219659Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=868.704µs 12:02:14 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 12:02:14 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 12:02:14 mariadb | 12:02:14 mariadb | 2024-01-23 11:59:39+00:00 [Note] [Entrypoint]: Stopping temporary server 12:02:14 mariadb | 2024-01-23 11:59:39 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 12:02:14 mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: FTS optimize thread exiting. 12:02:14 mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: Starting shutdown... 12:02:14 mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 12:02:14 mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: Buffer pool(s) dump completed at 240123 11:59:39 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Shutdown completed; log sequence number 330365; transaction id 298 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd: Shutdown complete 12:02:14 mariadb | 12:02:14 mariadb | 2024-01-23 11:59:40+00:00 [Note] [Entrypoint]: Temporary server stopped 12:02:14 mariadb | 12:02:14 mariadb | 2024-01-23 11:59:40+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 12:02:14 mariadb | 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
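The Grafana migrator above is populating a fresh SQLite database at /var/lib/grafana/grafana.db; a hypothetical spot-check from inside the grafana container, assuming the sqlite3 CLI is installed (the stock image does not guarantee it):

    # each executed migration is recorded in the migration_log table created above
    sqlite3 /var/lib/grafana/grafana.db 'SELECT migration_id, success FROM migration_log LIMIT 5;'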
12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Number of transaction pools: 1 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Completed initialization of buffer pool 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: 128 rollback segments are active. 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: log sequence number 330365; transaction id 299 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] Plugin 'FEEDBACK' is disabled. 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] Server socket created on IP: '0.0.0.0'. 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] Server socket created on IP: '::'. 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd: ready for connections. 
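With mariadbd now reporting ready for connections, a lighter readiness probe than a full SQL login is available via mysqladmin (assumed shipped alongside the server in this image; same credential assumptions as the earlier sketch):

    # prints 'mysqld is alive' once the server accepts connections
    mysqladmin ping -h mariadb -uroot -psecret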
12:02:14 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 12:02:14 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Buffer pool(s) load completed at 240123 11:59:40 12:02:14 mariadb | 2024-01-23 11:59:40 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 12:02:14 mariadb | 2024-01-23 11:59:40 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 12:02:14 mariadb | 2024-01-23 11:59:41 25 [Warning] Aborted connection 25 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 12:02:14 mariadb | 2024-01-23 11:59:41 32 [Warning] Aborted connection 32 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 12:02:14 policy-apex-pdp | sasl.login.read.timeout.ms = null 12:02:14 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 12:02:14 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 12:02:14 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 12:02:14 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 12:02:14 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 12:02:14 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 12:02:14 policy-apex-pdp | sasl.mechanism = GSSAPI 12:02:14 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 12:02:14 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 12:02:14 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 12:02:14 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 12:02:14 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 12:02:14 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 12:02:14 policy-apex-pdp | security.protocol = PLAINTEXT 12:02:14 policy-apex-pdp | security.providers = null 12:02:14 policy-apex-pdp | send.buffer.bytes = 131072 12:02:14 policy-apex-pdp | session.timeout.ms = 45000 12:02:14 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 12:02:14 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 12:02:14 policy-apex-pdp | ssl.cipher.suites = null 12:02:14 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:02:14 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 12:02:14 policy-apex-pdp | ssl.engine.factory.class = null 12:02:14 policy-apex-pdp | ssl.key.password = null 12:02:14 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 12:02:14 policy-apex-pdp | ssl.keystore.certificate.chain = null 12:02:14 policy-apex-pdp | ssl.keystore.key = null 12:02:14 policy-apex-pdp | ssl.keystore.location = null 12:02:14 policy-apex-pdp | ssl.keystore.password = null 12:02:14 policy-apex-pdp | ssl.keystore.type = JKS 12:02:14 policy-apex-pdp | ssl.protocol = TLSv1.3 12:02:14 policy-apex-pdp | ssl.provider = null 12:02:14 policy-apex-pdp | ssl.secure.random.implementation = null 12:02:14 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 12:02:14 policy-apex-pdp | ssl.truststore.certificates = 
null 12:02:14 policy-apex-pdp | ssl.truststore.location = null 12:02:14 policy-apex-pdp | ssl.truststore.password = null 12:02:14 policy-apex-pdp | ssl.truststore.type = JKS 12:02:14 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 policy-apex-pdp | 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.573+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.573+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.573+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011214572 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.576+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-1, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Subscribed to topic(s): policy-pdp-pap 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.589+00:00|INFO|ServiceManager|main] service manager starting 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.589+00:00|INFO|ServiceManager|main] service manager starting topics 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.595+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5e219e28-7118-417e-b91d-edf2321c7473, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.624+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 12:02:14 policy-apex-pdp | allow.auto.create.topics = true 12:02:14 policy-apex-pdp | auto.commit.interval.ms = 5000 12:02:14 policy-apex-pdp | auto.include.jmx.reporter = true 12:02:14 policy-apex-pdp | auto.offset.reset = latest 12:02:14 policy-apex-pdp | bootstrap.servers = [kafka:9092] 12:02:14 policy-apex-pdp | check.crcs = true 12:02:14 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 12:02:14 policy-apex-pdp | client.id = consumer-5e219e28-7118-417e-b91d-edf2321c7473-2 12:02:14 policy-apex-pdp | client.rack = 12:02:14 kafka | ===> User 12:02:14 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 12:02:14 kafka | ===> Configuring ... 12:02:14 kafka | Running in Zookeeper mode... 12:02:14 kafka | ===> Running preflight checks ... 12:02:14 kafka | ===> Check if /var/lib/kafka/data is writable ... 12:02:14 kafka | ===> Check if Zookeeper is healthy ... 12:02:14 kafka | [2024-01-23 11:59:42,081] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,081] INFO Client environment:host.name=451f6c43c6af (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.name=Linux 
(org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,085] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 12:02:14 kafka | [2024-01-23 11:59:42,093] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 12:02:14 kafka | [2024-01-23 11:59:42,100] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:host.name=c66458784174 (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 policy-api | :: Spring Boot :: (v3.1.4) 12:02:14 policy-api | 12:02:14 policy-api | [2024-01-23T11:59:50.405+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin) 12:02:14 policy-api | [2024-01-23T11:59:50.406+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 12:02:14 policy-api | [2024-01-23T11:59:52.196+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 12:02:14 policy-api | [2024-01-23T11:59:52.286+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 80 ms.
Found 6 JPA repository interfaces. 12:02:14 policy-api | [2024-01-23T11:59:52.694+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 12:02:14 policy-api | [2024-01-23T11:59:52.695+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 12:02:14 policy-api | [2024-01-23T11:59:53.355+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 12:02:14 policy-api | [2024-01-23T11:59:53.364+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 12:02:14 policy-api | [2024-01-23T11:59:53.366+00:00|INFO|StandardService|main] Starting service [Tomcat] 12:02:14 policy-api | [2024-01-23T11:59:53.366+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] 12:02:14 policy-api | [2024-01-23T11:59:53.452+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 12:02:14 policy-api | [2024-01-23T11:59:53.452+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2978 ms 12:02:14 policy-api | [2024-01-23T11:59:53.902+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 12:02:14 policy-api | [2024-01-23T11:59:53.984+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 12:02:14 policy-api | [2024-01-23T11:59:53.987+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 12:02:14 policy-api | [2024-01-23T11:59:54.035+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 12:02:14 policy-api | [2024-01-23T11:59:54.404+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 12:02:14 policy-api | [2024-01-23T11:59:54.432+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 12:02:14 policy-api | [2024-01-23T11:59:54.548+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@3d37203b 12:02:14 policy-api | [2024-01-23T11:59:54.550+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
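HikariPool-1 has just opened its first org.mariadb.jdbc connection; one hedged way to observe the pool from the database side (same network and credential assumptions as the earlier sketches):

    # list policy-api's pooled sessions as the server sees them
    mysql -h mariadb -uroot -psecret --execute \
      "SELECT user, host, db FROM information_schema.PROCESSLIST WHERE user='policy_user';"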
12:02:14 policy-api | [2024-01-23T11:59:54.577+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 12:02:14 policy-api | [2024-01-23T11:59:54.578+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 12:02:14 policy-api | [2024-01-23T11:59:56.385+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 12:02:14 policy-api | [2024-01-23T11:59:56.389+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 12:02:14 policy-api | [2024-01-23T11:59:57.679+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 12:02:14 policy-api | [2024-01-23T11:59:58.448+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 12:02:14 policy-api | [2024-01-23T11:59:59.624+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 12:02:14 policy-api | [2024-01-23T11:59:59.839+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@3005133e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@19a7e618, org.springframework.security.web.context.SecurityContextHolderFilter@2542d320, org.springframework.security.web.header.HeaderWriterFilter@39d666e0, org.springframework.security.web.authentication.logout.LogoutFilter@4295b0b8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4bbb00a4, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@66161fee, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@67127bb1, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@22ccd80f, org.springframework.security.web.access.ExceptionTranslationFilter@5f160f9c, org.springframework.security.web.access.intercept.AuthorizationFilter@6f3a8d5e] 12:02:14 policy-api | [2024-01-23T12:00:00.690+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 12:02:14 policy-api | [2024-01-23T12:00:00.747+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 12:02:14 policy-api | [2024-01-23T12:00:00.770+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 12:02:14 policy-api | [2024-01-23T12:00:00.791+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.209 seconds (process running for 11.824) 12:02:14 policy-api | [2024-01-23T12:00:20.145+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 12:02:14 policy-api | [2024-01-23T12:00:20.145+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 12:02:14 policy-api | [2024-01-23T12:00:20.147+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 12:02:14 policy-api | [2024-01-23T12:00:20.412+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** 
OrderedServiceImpl implementers: 12:02:14 policy-api | [] 12:02:14 kafka | [2024-01-23 11:59:42,126] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) 12:02:14 kafka | [2024-01-23 11:59:42,126] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 12:02:14 kafka | [2024-01-23 11:59:42,134] INFO Socket connection established, initiating session, client: /172.17.0.9:56920, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 12:02:14 kafka | [2024-01-23 11:59:42,165] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000043fa10000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 12:02:14 kafka | [2024-01-23 11:59:42,286] INFO Session: 0x10000043fa10000 closed (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:42,287] INFO EventThread shut down for session: 0x10000043fa10000 (org.apache.zookeeper.ClientCnxn) 12:02:14 kafka | Using log4j config /etc/kafka/log4j.properties 12:02:14 kafka | ===> Launching ... 12:02:14 kafka | ===> Launching kafka ... 12:02:14 kafka | [2024-01-23 11:59:43,006] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 12:02:14 kafka | [2024-01-23 11:59:43,333] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 12:02:14 kafka | [2024-01-23 11:59:43,403] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 12:02:14 kafka | [2024-01-23 11:59:43,404] INFO starting (kafka.server.KafkaServer) 12:02:14 kafka | [2024-01-23 11:59:43,405] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 12:02:14 kafka | [2024-01-23 11:59:43,419] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:host.name=451f6c43c6af (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/jav
a/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client 
environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 12:02:14 policy-apex-pdp | connections.max.idle.ms = 540000 12:02:14 policy-apex-pdp | default.api.timeout.ms = 60000 12:02:14 policy-apex-pdp | enable.auto.commit = true 12:02:14 policy-apex-pdp | exclude.internal.topics = true 12:02:14 policy-apex-pdp | fetch.max.bytes = 52428800 12:02:14 policy-apex-pdp | fetch.max.wait.ms = 500 12:02:14 policy-apex-pdp | fetch.min.bytes = 1 12:02:14 policy-apex-pdp | group.id = 5e219e28-7118-417e-b91d-edf2321c7473 12:02:14 policy-apex-pdp | group.instance.id = null 12:02:14 policy-apex-pdp | heartbeat.interval.ms = 3000 12:02:14 policy-apex-pdp | interceptor.classes = [] 12:02:14 policy-apex-pdp | internal.leave.group.on.close = true 12:02:14 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 12:02:14 policy-apex-pdp | isolation.level = read_uncommitted 12:02:14 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 policy-apex-pdp | max.partition.fetch.bytes = 1048576 12:02:14 policy-apex-pdp | max.poll.interval.ms = 300000 12:02:14 policy-apex-pdp | max.poll.records = 500 12:02:14 policy-apex-pdp | metadata.max.age.ms = 300000 12:02:14 policy-apex-pdp | metric.reporters = [] 12:02:14 policy-apex-pdp | metrics.num.samples = 2 12:02:14 policy-apex-pdp | metrics.recording.level = INFO 12:02:14 policy-apex-pdp | metrics.sample.window.ms = 30000 12:02:14 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 12:02:14 policy-apex-pdp | receive.buffer.bytes = 65536 12:02:14 policy-apex-pdp | reconnect.backoff.max.ms = 1000 12:02:14 policy-apex-pdp | reconnect.backoff.ms = 50 12:02:14 policy-apex-pdp | request.timeout.ms = 30000 12:02:14 policy-apex-pdp | retry.backoff.ms = 100 12:02:14 policy-apex-pdp | sasl.client.callback.handler.class = null 12:02:14 policy-apex-pdp | sasl.jaas.config = null 12:02:14 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 policy-apex-pdp | sasl.kerberos.service.name = null 12:02:14 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 12:02:14 policy-apex-pdp | sasl.login.callback.handler.class = null 12:02:14 policy-apex-pdp | sasl.login.class = null 12:02:14 policy-apex-pdp | sasl.login.connect.timeout.ms = null 12:02:14 policy-apex-pdp | sasl.login.read.timeout.ms = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.408868738Z level=info msg="Executing migration" id="add unique index user.email" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.410018506Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.148508ms 12:02:14 grafana | 
logger=migrator t=2024-01-23T11:59:39.415901786Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.417063945Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.162129ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.423942975Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.42542083Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.478025ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.431059607Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.434239589Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.179602ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.440754361Z level=info msg="Executing migration" id="create user table v2" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.441620795Z level=info msg="Migration successfully executed" id="create user table v2" duration=866.414µs 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.44702511Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.448307995Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.282735ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.453527281Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.4548894Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.361909ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.460202891Z level=info msg="Executing migration" id="copy data_source v1 to v2" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.460717197Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=514.386µs 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.465243327Z level=info msg="Executing migration" id="Drop old table user_v1" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.466213217Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=969.4µs 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.471437973Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.473439225Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.994792ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.476853938Z level=info msg="Executing migration" id="Update user table charset" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.476989925Z level=info msg="Migration successfully executed" id="Update user table charset" duration=135.557µs 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.484607953Z level=info msg="Executing migration" id="Add last_seen_at column to user" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.486541141Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.932908ms 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.492773759Z level=info msg="Executing migration" id="Add missing user data" 12:02:14 grafana | 
logger=migrator t=2024-01-23T11:59:39.493339657Z level=info msg="Migration successfully executed" id="Add missing user data" duration=565.749µs 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.500565675Z level=info msg="Executing migration" id="Add is_disabled column to user" 12:02:14 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 12:02:14 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 12:02:14 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 12:02:14 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 12:02:14 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 12:02:14 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 12:02:14 policy-apex-pdp | sasl.mechanism = GSSAPI 12:02:14 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 12:02:14 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 12:02:14 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 12:02:14 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 12:02:14 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 12:02:14 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 12:02:14 policy-apex-pdp | security.protocol = PLAINTEXT 12:02:14 policy-apex-pdp | security.providers = null 12:02:14 policy-apex-pdp | send.buffer.bytes = 131072 12:02:14 policy-apex-pdp | session.timeout.ms = 45000 12:02:14 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 12:02:14 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 12:02:14 policy-apex-pdp | ssl.cipher.suites = null 12:02:14 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:02:14 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 12:02:14 policy-apex-pdp | ssl.engine.factory.class = null 12:02:14 policy-apex-pdp | ssl.key.password = null 12:02:14 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 12:02:14 policy-apex-pdp | ssl.keystore.certificate.chain = null 12:02:14 policy-apex-pdp | ssl.keystore.key = null 12:02:14 policy-apex-pdp | ssl.keystore.location = null 12:02:14 policy-apex-pdp | ssl.keystore.password = null 12:02:14 policy-apex-pdp | ssl.keystore.type = JKS 12:02:14 policy-apex-pdp | ssl.protocol = TLSv1.3 12:02:14 policy-apex-pdp | ssl.provider = null 12:02:14 policy-apex-pdp | ssl.secure.random.implementation = null 12:02:14 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 12:02:14 policy-apex-pdp | ssl.truststore.certificates = null 12:02:14 policy-apex-pdp | ssl.truststore.location = null 12:02:14 policy-apex-pdp | ssl.truststore.password = null 12:02:14 policy-apex-pdp | ssl.truststore.type = JKS 12:02:14 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 policy-apex-pdp | 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.634+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.634+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.634+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011214634 12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.635+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, 
groupId=5e219e28-7118-417e-b91d-edf2321c7473] Subscribed to topic(s): policy-pdp-pap
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.637+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c16be33e-df41-44df-94a4-99528a749fa0, alive=false, publisher=null]]: starting
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.664+00:00|INFO|ProducerConfig|main] ProducerConfig values:
12:02:14 policy-apex-pdp | acks = -1
12:02:14 policy-apex-pdp | auto.include.jmx.reporter = true
12:02:14 policy-apex-pdp | batch.size = 16384
12:02:14 policy-apex-pdp | bootstrap.servers = [kafka:9092]
12:02:14 policy-apex-pdp | buffer.memory = 33554432
12:02:14 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
12:02:14 policy-apex-pdp | client.id = producer-1
12:02:14 policy-apex-pdp | compression.type = none
12:02:14 policy-apex-pdp | connections.max.idle.ms = 540000
12:02:14 policy-apex-pdp | delivery.timeout.ms = 120000
12:02:14 policy-apex-pdp | enable.idempotence = true
12:02:14 policy-apex-pdp | interceptor.classes = []
12:02:14 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
12:02:14 policy-apex-pdp | linger.ms = 0
12:02:14 policy-apex-pdp | max.block.ms = 60000
12:02:14 policy-apex-pdp | max.in.flight.requests.per.connection = 5
12:02:14 policy-apex-pdp | max.request.size = 1048576
12:02:14 policy-apex-pdp | metadata.max.age.ms = 300000
12:02:14 policy-apex-pdp | metadata.max.idle.ms = 300000
12:02:14 policy-apex-pdp | metric.reporters = []
12:02:14 policy-apex-pdp | metrics.num.samples = 2
12:02:14 policy-apex-pdp | metrics.recording.level = INFO
12:02:14 policy-apex-pdp | metrics.sample.window.ms = 30000
12:02:14 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
12:02:14 policy-apex-pdp | partitioner.availability.timeout.ms = 0
12:02:14 policy-apex-pdp | partitioner.class = null
12:02:14 policy-apex-pdp | partitioner.ignore.keys = false
12:02:14 policy-apex-pdp | receive.buffer.bytes = 32768
12:02:14 policy-apex-pdp | reconnect.backoff.max.ms = 1000
12:02:14 policy-apex-pdp | reconnect.backoff.ms = 50
12:02:14 policy-apex-pdp | request.timeout.ms = 30000
12:02:14 policy-apex-pdp | retries = 2147483647
12:02:14 policy-apex-pdp | retry.backoff.ms = 100
12:02:14 policy-apex-pdp | sasl.client.callback.handler.class = null
12:02:14 policy-apex-pdp | sasl.jaas.config = null
12:02:14 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
12:02:14 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
12:02:14 policy-apex-pdp | sasl.kerberos.service.name = null
12:02:14 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
12:02:14 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
12:02:14 policy-apex-pdp | sasl.login.callback.handler.class = null
12:02:14 policy-apex-pdp | sasl.login.class = null
12:02:14 policy-apex-pdp | sasl.login.connect.timeout.ms = null
12:02:14 policy-apex-pdp | sasl.login.read.timeout.ms = null
12:02:14 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
12:02:14 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
12:02:14 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
12:02:14 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
12:02:14 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
12:02:14 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
12:02:14 policy-apex-pdp | sasl.mechanism = GSSAPI
12:02:14 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
12:02:14 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
12:02:14 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:02:14 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
12:02:14 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
12:02:14 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
12:02:14 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
12:02:14 policy-apex-pdp | security.protocol = PLAINTEXT
12:02:14 policy-apex-pdp | security.providers = null
12:02:14 policy-apex-pdp | send.buffer.bytes = 131072
12:02:14 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
12:02:14 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
12:02:14 policy-apex-pdp | ssl.cipher.suites = null
12:02:14 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
12:02:14 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
12:02:14 policy-apex-pdp | ssl.engine.factory.class = null
12:02:14 policy-apex-pdp | ssl.key.password = null
12:02:14 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
12:02:14 policy-apex-pdp | ssl.keystore.certificate.chain = null
12:02:14 policy-apex-pdp | ssl.keystore.key = null
12:02:14 policy-apex-pdp | ssl.keystore.location = null
12:02:14 policy-apex-pdp | ssl.keystore.password = null
12:02:14 policy-apex-pdp | ssl.keystore.type = JKS
12:02:14 policy-apex-pdp | ssl.protocol = TLSv1.3
12:02:14 policy-apex-pdp | ssl.provider = null
12:02:14 policy-apex-pdp | ssl.secure.random.implementation = null
12:02:14 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
12:02:14 policy-apex-pdp | ssl.truststore.certificates = null
12:02:14 policy-apex-pdp | ssl.truststore.location = null
12:02:14 policy-apex-pdp | ssl.truststore.password = null
12:02:14 policy-apex-pdp | ssl.truststore.type = JKS
12:02:14 policy-apex-pdp | transaction.timeout.ms = 60000
12:02:14 policy-apex-pdp | transactional.id = null
12:02:14 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
12:02:14 policy-apex-pdp |
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.709+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
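For reference, the producer configuration dumped above corresponds, in the plain kafka-clients Java API, to roughly the following minimal sketch. This is illustrative only: the class name PdpPapProducerSketch and the sample payload are invented here and are not ONAP source code; only the broker address, client id, topic, and the acks/idempotence settings come from the log.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ProducerConfig dump above.
            props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.setProperty(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");
            props.setProperty(ProducerConfig.ACKS_CONFIG, "all");               // logged as acks = -1, i.e. "all"
            props.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // hence "Instantiated an idempotent producer"
            props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The sink publishes PDP_STATUS heartbeats to this topic, as the log shows shortly after.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            }
        }
    }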
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.744+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.745+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.745+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011214744
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.747+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c16be33e-df41-44df-94a4-99528a749fa0, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.747+00:00|INFO|ServiceManager|main] service manager starting set alive
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.747+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.750+00:00|INFO|ServiceManager|main] service manager starting topic sinks
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.750+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.753+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.753+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.753+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.754+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5e219e28-7118-417e-b91d-edf2321c7473, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.754+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5e219e28-7118-417e-b91d-edf2321c7473, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.754+00:00|INFO|ServiceManager|main] service manager starting Create REST server
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.783+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
12:02:14 policy-apex-pdp | []
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.785+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c834911f-6dc0-4825-9b0e-296ed02f1e44","timestampMs":1706011214757,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"}
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.975+00:00|INFO|ServiceManager|main] service manager starting Rest Server
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.976+00:00|INFO|ServiceManager|main] service manager starting
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.976+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.976+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.988+00:00|INFO|ServiceManager|main] service manager started
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.989+00:00|INFO|ServiceManager|main] service manager started
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.989+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
12:02:14 policy-apex-pdp | [2024-01-23T12:00:14.989+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.072+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: sXWmytVdQyKDGijCKdambA
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.074+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.078+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Cluster ID: sXWmytVdQyKDGijCKdambA
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.089+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] (Re-)joining group
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.110+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Request joining group due to: need to re-join with the given member-id: consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.110+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.110+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] (Re-)joining group
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.653+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
12:02:14 policy-apex-pdp | [2024-01-23T12:00:15.655+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
12:02:14 policy-apex-pdp | [2024-01-23T12:00:18.116+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93', protocol='range'}
12:02:14 policy-apex-pdp | [2024-01-23T12:00:18.125+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Finished assignment for group at generation 1: {consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93=Assignment(partitions=[policy-pdp-pap-0])}
12:02:14 policy-apex-pdp | [2024-01-23T12:00:18.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93', protocol='range'}
12:02:14 policy-apex-pdp | [2024-01-23T12:00:18.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
12:02:14 policy-apex-pdp | [2024-01-23T12:00:18.139+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Adding newly assigned partitions: policy-pdp-pap-0
12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
12:02:14 kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
12:02:14 kafka | [2024-01-23 11:59:43,426] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper)
12:02:14 kafka | [2024-01-23 11:59:43,430] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
12:02:14 kafka | [2024-01-23 11:59:43,436] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
12:02:14 kafka | [2024-01-23 11:59:43,437] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
12:02:14 kafka | [2024-01-23 11:59:43,440] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn)
12:02:14 kafka | [2024-01-23 11:59:43,450] INFO Socket connection established, initiating session, client: /172.17.0.9:56922, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn)
12:02:14 kafka | [2024-01-23 11:59:43,460] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000043fa10001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
12:02:14 kafka | [2024-01-23 11:59:43,466] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
12:02:14 kafka | [2024-01-23 11:59:43,771] INFO Cluster ID = sXWmytVdQyKDGijCKdambA (kafka.server.KafkaServer)
12:02:14 kafka | [2024-01-23 11:59:43,773] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
12:02:14 kafka | [2024-01-23 11:59:43,819] INFO KafkaConfig values:
12:02:14 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
12:02:14 kafka | alter.config.policy.class.name = null
12:02:14 kafka | alter.log.dirs.replication.quota.window.num = 11
12:02:14 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
12:02:14 kafka | authorizer.class.name =
12:02:14 kafka | auto.create.topics.enable = true
12:02:14 kafka | auto.include.jmx.reporter = true
12:02:14 kafka | auto.leader.rebalance.enable = true
12:02:14 kafka | background.threads = 10
12:02:14 kafka | broker.heartbeat.interval.ms = 2000
12:02:14 kafka | broker.id = 1
12:02:14 kafka | broker.id.generation.enable = true
12:02:14 kafka | broker.rack = null
12:02:14 kafka | broker.session.timeout.ms = 9000
12:02:14 kafka | client.quota.callback.class = null
12:02:14 kafka | compression.type = producer
12:02:14 kafka | connection.failed.authentication.delay.ms = 100
12:02:14 kafka | connections.max.idle.ms = 600000
12:02:14 kafka | connections.max.reauth.ms = 0
12:02:14 kafka | control.plane.listener.name = null
12:02:14 kafka | controlled.shutdown.enable = true
12:02:14 kafka | controlled.shutdown.max.retries = 3
12:02:14 kafka | controlled.shutdown.retry.backoff.ms = 5000
12:02:14 kafka | controller.listener.names = null
12:02:14 policy-apex-pdp | [2024-01-23T12:00:18.148+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Found no committed offset for partition policy-pdp-pap-0
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bi
n/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.502545656Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.979271ms 12:02:14 kafka | controller.quorum.append.linger.ms = 25 12:02:14 kafka | controller.quorum.election.backoff.max.ms = 1000 12:02:14 policy-apex-pdp | [2024-01-23T12:00:18.161+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 12:02:14 policy-pap | Waiting for mariadb port 3306... 
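The consumer-group handshake in the apex-pdp entries above (the initial MemberIdRequiredException rebalance, the range assignment of policy-pdp-pap-0, and the reset to FetchPosition{offset=1, ...}) is the normal first-join sequence of a Kafka consumer: the broker hands the client a member id on the first join attempt and asks it to rejoin, which is why that INFO line is benign. A minimal sketch of an equivalent consumer follows, assuming the standard kafka-clients Java API; the class name and poll loop are invented for illustration, while the servers, group id, and topic come from the log.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Servers and group id as logged; the UUID group name is generated per PDP instance.
            props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "5e219e28-7118-417e-b91d-edf2321c7473");
            props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // The first poll triggers the join/assignment sequence seen above
                // (the source wrapper logs fetchTimeout=15000, hence the 15 s poll here).
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(15))) {
                    System.out.println(rec.value()); // e.g. a PDP_STATUS heartbeat JSON
                }
            }
        }
    }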
12:02:14 prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.508858747Z level=info msg="Executing migration" id="Add index user.login/user.email" 12:02:14 policy-db-migrator | Waiting for mariadb port 3306... 12:02:14 kafka | controller.quorum.election.timeout.ms = 1000 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.753+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 12:02:14 policy-pap | mariadb (172.17.0.3:3306) open 12:02:14 prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.509844788Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=986.56µs 12:02:14 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 12:02:14 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 12:02:14 kafka | controller.quorum.fetch.timeout.ms = 2000 12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 policy-pap | Waiting for kafka port 9092... 
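The broker's ZooKeeper handshake logged earlier ("Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000", then "Session establishment complete ... negotiated timeout = 18000") maps onto the plain org.apache.zookeeper client API roughly as below. This is a sketch only, not Kafka's actual ZooKeeperClient wrapper; the connect string and session timeout are the logged values.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkConnectSketch {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // connectString and sessionTimeout as in the broker's log entry.
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, (WatchedEvent event) -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown(); // corresponds to "Session establishment complete"
                }
            });
            connected.await();
            System.out.println("session id = 0x" + Long.toHexString(zk.getSessionId()));
            zk.close();
        }
    }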
12:02:14 prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.515147397Z level=info msg="Executing migration" id="Add is_service_account column to user" 12:02:14 simulator | overriding logback.xml 12:02:14 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 12:02:14 kafka | controller.quorum.request.timeout.ms = 2000 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 policy-pap | kafka (172.17.0.9:9092) open 12:02:14 prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.516396131Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.249664ms 12:02:14 simulator | 2024-01-23 11:59:36,739 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 12:02:14 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 12:02:14 kafka | controller.quorum.retry.backoff.ms = 20 12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 policy-pap | Waiting for api port 6969... 
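The "Waiting for mariadb port 3306..." and "nc: connect ... Connection refused" lines above come from the containers' startup scripts polling a TCP port until the dependency is up. The actual scripts use nc in a shell loop; the following hypothetical Java helper is shown only to make the retry semantics explicit.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class WaitForPort {
        // Blocks until host:port accepts TCP connections, retrying once per second,
        // like the "Waiting for mariadb port 3306..." loop in the log above.
        static void waitForPort(String host, int port) throws InterruptedException {
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 1000);
                    return; // e.g. "mariadb (172.17.0.3:3306) open"
                } catch (IOException refused) {
                    Thread.sleep(1000); // "Connection refused" -> retry
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            waitForPort("mariadb", 3306);
            waitForPort("kafka", 9092);
        }
    }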
12:02:14 prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.522538674Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 12:02:14 simulator | 2024-01-23 11:59:36,817 INFO org.onap.policy.models.simulators starting 12:02:14 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 12:02:14 kafka | controller.quorum.voters = [] 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.785+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:02:14 policy-pap | api (172.17.0.7:6969) open 12:02:14 prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:36,817 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 12:02:14 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 12:02:14 kafka | controller.quota.window.num = 11 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.932+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 12:02:14 prometheus | ts=2024-01-23T11:59:35.250Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:37,036 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 12:02:14 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 12:02:14 kafka | controller.quota.window.size.seconds = 1 12:02:14 policy-apex-pdp | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 12:02:14 prometheus | ts=2024-01-23T11:59:35.251Z caller=main.go:1039 level=info msg="Starting TSDB ..." 
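The apex-pdp Jetty dump earlier shows a Prometheus MetricsServlet mapped to /metrics on port 6969 behind basic auth (the user and password are visible in this throwaway CSIT environment's log), which is what the Prometheus server starting here would scrape. A sketch of fetching that endpoint with java.net.http, assuming the service is reachable as localhost from where the sketch runs:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class ScrapeMetricsSketch {
        public static void main(String[] args) throws Exception {
            // Credentials as shown in the JettyServletServer dump above (test-only setup).
            String auth = Base64.getEncoder().encodeToString("policyadmin:zb!XztG34".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:6969/metrics")) // MetricsServlet mapping
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // Prometheus text exposition format
        }
    }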
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:37,037 INFO org.onap.policy.models.simulators starting A&AI simulator 12:02:14 kafka | controller.socket.timeout.ms = 30000 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.944+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 12:02:14 policy-pap | 12:02:14 prometheus | ts=2024-01-23T11:59:35.257Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:37,164 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 12:02:14 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 12:02:14 kafka | create.topic.policy.class.name = null 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.944+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 12:02:14 policy-pap | . ____ _ __ _ _ 12:02:14 prometheus | ts=2024-01-23T11:59:35.257Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:37,176 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 12:02:14 policy-db-migrator | 321 blocks 12:02:14 policy-db-migrator | Preparing upgrade release version: 0800 12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 12:02:14 prometheus | ts=2024-01-23T11:59:35.259Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:37,182 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 12:02:14 kafka | default.replication.factor = 1 12:02:14 policy-db-migrator | Preparing upgrade release version: 0900 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.945+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 12:02:14 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 12:02:14 prometheus | ts=2024-01-23T11:59:35.259Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.27µs 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 grafana | logger=migrator 
t=2024-01-23T11:59:39.532387745Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.848831ms 12:02:14 simulator | 2024-01-23 11:59:37,188 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 12:02:14 kafka | delegation.token.expiry.check.interval.ms = 3600000 12:02:14 policy-db-migrator | Preparing upgrade release version: 1000 12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 12:02:14 prometheus | ts=2024-01-23T11:59:35.259Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:37,256 INFO Session workerName=node0 12:02:14 kafka | delegation.token.expiry.time.ms = 86400000 12:02:14 policy-db-migrator | Preparing upgrade release version: 1100 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.959+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 12:02:14 prometheus | ts=2024-01-23T11:59:35.260Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,914] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,915] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 simulator | 2024-01-23 11:59:37,741 INFO Using GSON for REST calls 12:02:14 kafka | delegation.token.master.key = null 12:02:14 policy-db-migrator | Preparing upgrade release version: 1200 12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 policy-pap | =========|_|==============|___/=/_/_/_/ 12:02:14 prometheus | ts=2024-01-23T11:59:35.260Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=28.441µs wal_replay_duration=375.77µs wbl_replay_duration=170ns total_replay_duration=437.353µs 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,915] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,916] INFO getData response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 12:02:14 simulator | 2024-01-23 11:59:37,830 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE} 12:02:14 kafka | delegation.token.max.lifetime.ms = 604800000 12:02:14 policy-db-migrator | Preparing upgrade release version: 1300 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.960+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:02:14 policy-pap | :: Spring Boot :: (v3.1.7) 12:02:14 prometheus | ts=2024-01-23T11:59:35.262Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,916] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.538937878Z level=info msg="Executing migration" id="create temp user table v1-7" 12:02:14 simulator | 2024-01-23 11:59:37,838 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 12:02:14 kafka | delegation.token.secret.key = null 12:02:14 policy-db-migrator | Done 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.960+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 policy-pap | 12:02:14 prometheus | ts=2024-01-23T11:59:35.262Z caller=main.go:1063 level=info msg="TSDB started" 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,918] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,918] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 12:02:14 simulator | 2024-01-23 11:59:37,845 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1577ms 12:02:14 kafka | delete.records.purgatory.purge.interval.requests = 1 12:02:14 policy-db-migrator | name version 12:02:14 policy-pap | [2024-01-23T12:00:03.464+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 prometheus | ts=2024-01-23T11:59:35.262Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 12:02:14 simulator | 2024-01-23 11:59:37,845 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], 
context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4337 ms. 12:02:14 kafka | delete.topic.enable = true 12:02:14 policy-db-migrator | policyadmin 0 12:02:14 policy-pap | [2024-01-23T12:00:03.465+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 12:02:14 policy-apex-pdp | [2024-01-23T12:00:34.961+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:02:14 prometheus | ts=2024-01-23T11:59:35.265Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.012099ms db_storage=1.62µs remote_storage=1.91µs web_handler=810ns query_engine=991ns scrape=237.172µs scrape_sd=137.567µs notify=33.812µs notify_sd=13.841µs rules=1.8µs tracing=5.36µs 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 12:02:14 simulator | 2024-01-23 11:59:37,850 INFO org.onap.policy.models.simulators starting SDNC simulator 12:02:14 kafka | early.start.listeners = null 12:02:14 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 12:02:14 policy-pap | [2024-01-23T12:00:05.335+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 prometheus | ts=2024-01-23T11:59:35.265Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." 12:02:14 zookeeper_1 | [2024-01-23 11:59:40,923] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.540395943Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.458254ms 12:02:14 simulator | 2024-01-23 11:59:37,853 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 12:02:14 kafka | fetch.max.bytes = 57671680 12:02:14 policy-db-migrator | upgrade: 0 -> 1300 12:02:14 policy-pap | [2024-01-23T12:00:05.454+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 108 ms. Found 7 JPA repository interfaces. 
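The PAP banner and startup lines above are standard Spring Boot 3.1.7 bootstrap output: the application class starts, no profile is set, and Spring Data then scans for JPA repositories. A minimal skeleton of such an entry point looks roughly like the following; this is illustrative only, since the real PolicyPapApplication lives in the ONAP policy/pap repository.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    // Illustrative stand-in for an application like PolicyPapApplication;
    // running it with no profile produces the "No active profile set,
    // falling back to 1 default profile" line seen above.
    @SpringBootApplication
    public class PapLikeApplication {
        public static void main(String[] args) {
            SpringApplication.run(PapLikeApplication.class, args);
        }
    }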
12:02:14 policy-apex-pdp | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 prometheus | ts=2024-01-23T11:59:35.266Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.546984248Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.547822081Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=837.752µs 12:02:14 simulator | 2024-01-23 11:59:37,854 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 12:02:14 kafka | fetch.purgatory.purge.interval.requests = 1000 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:05.852+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.008+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.555458429Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.556763696Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.298696ms 12:02:14 simulator | 2024-01-23 11:59:37,855 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 12:02:14 kafka | group.consumer.assignors = [] 12:02:14 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 12:02:14 policy-pap | [2024-01-23T12:00:05.853+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.565505621Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,923] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
12:02:14 simulator | 2024-01-23 11:59:37,856 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
12:02:14 kafka | group.consumer.heartbeat.interval.ms = 5000
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:06.508+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.016+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.566792036Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.284745ms
12:02:14 simulator | 2024-01-23 11:59:37,871 INFO Session workerName=node0
12:02:14 kafka | group.consumer.max.heartbeat.interval.ms = 15000
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,924] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
12:02:14 policy-pap | [2024-01-23T12:00:06.518+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active.
No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.572614633Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
12:02:14 simulator | 2024-01-23 11:59:37,941 INFO Using GSON for REST calls
12:02:14 kafka | group.consumer.max.session.timeout.ms = 60000
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,924] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:06.520+00:00|INFO|StandardService|main] Starting service [Tomcat]
12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.016+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.573490447Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=875.345µs
12:02:14 simulator | 2024-01-23 11:59:37,951 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}
12:02:14 kafka | group.consumer.max.size = 2147483647
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,925] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | [2024-01-23T12:00:06.520+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.052+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.577905762Z level=info msg="Executing migration" id="Update temp_user table charset"
12:02:14 simulator | 2024-01-23 11:59:37,953 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
12:02:14 kafka | group.consumer.min.heartbeat.interval.ms = 5000
12:02:14 zookeeper_1 | [2024-01-23 11:59:40,954] INFO Logging initialized @573ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | [2024-01-23T12:00:06.607+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
12:02:14 policy-apex-pdp | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.578003037Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=98.535µs
12:02:14 simulator | 2024-01-23 11:59:37,953 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1685ms
12:02:14 kafka | group.consumer.min.session.timeout.ms = 45000
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,054] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
12:02:14 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
12:02:14 policy-pap | [2024-01-23T12:00:06.607+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3063 ms
12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.054+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.584609683Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
12:02:14 simulator | 2024-01-23 11:59:37,953 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4902 ms.
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,054] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:07.036+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.586003104Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.38467ms
12:02:14 simulator | 2024-01-23 11:59:37,955 INFO org.onap.policy.models.simulators starting SO simulator
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,071] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
12:02:14 policy-pap | [2024-01-23T12:00:07.122+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.062+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.592004839Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
12:02:14 simulator | 2024-01-23 11:59:37,964 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,106] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:07.125+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
12:02:14 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.59279354Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=788.951µs
12:02:14 simulator | 2024-01-23 11:59:37,965 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,107] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | [2024-01-23T12:00:07.173+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
12:02:14 policy-apex-pdp | [2024-01-23T12:00:35.062+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.599464859Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
12:02:14 kafka | group.consumer.session.timeout.ms = 45000
12:02:14 simulator | 2024-01-23 11:59:37,966 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,108] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | [2024-01-23T12:00:07.532+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
12:02:14 policy-apex-pdp | [2024-01-23T12:00:56.156+00:00|INFO|RequestLog|qtp830863979-33] 172.17.0.5 - policyadmin [23/Jan/2024:12:00:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.49.1"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.600625058Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.160339ms
12:02:14 kafka | group.coordinator.new.enable = false
12:02:14 simulator | 2024-01-23 11:59:37,967 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,111] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
12:02:14 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
12:02:14 policy-pap | [2024-01-23T12:00:07.552+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
12:02:14 policy-apex-pdp | [2024-01-23T12:01:56.079+00:00|INFO|RequestLog|qtp830863979-28] 172.17.0.5 - policyadmin [23/Jan/2024:12:01:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.49.1"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.605581621Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
12:02:14 kafka | group.coordinator.threads = 1
12:02:14 simulator | 2024-01-23 11:59:37,971 INFO Session workerName=node0
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,118] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:07.663+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4068102e
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.606771061Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.187191ms
12:02:14 kafka | group.initial.rebalance.delay.ms = 3000
12:02:14 simulator | 2024-01-23 11:59:38,027 INFO Using GSON for REST calls
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,131] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
12:02:14 policy-pap | [2024-01-23T12:00:07.665+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.611627958Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
12:02:14 kafka | group.max.session.timeout.ms = 1800000
12:02:14 simulator | 2024-01-23 11:59:38,046 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,131] INFO Started @750ms (org.eclipse.jetty.server.Server)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:07.695+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.616434423Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.807435ms
12:02:14 kafka | group.max.size = 2147483647
12:02:14 simulator | 2024-01-23 11:59:38,047 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,131] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.620949363Z level=info msg="Executing migration" id="create temp_user v2"
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,136] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | [2024-01-23T12:00:07.696+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
12:02:14 simulator | 2024-01-23 11:59:38,047 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @1780ms
12:02:14 kafka | group.min.session.timeout.ms = 6000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.621829628Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=880.124µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,137] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
12:02:14 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
12:02:14 policy-pap | [2024-01-23T12:00:09.591+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
12:02:14 simulator | 2024-01-23 11:59:38,047 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4919 ms.
12:02:14 kafka | initial.broker.registration.timeout.ms = 60000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.625872573Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,139] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:09.594+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
12:02:14 simulator | 2024-01-23 11:59:38,049 INFO org.onap.policy.models.simulators starting VFC simulator
12:02:14 kafka | inter.broker.listener.name = PLAINTEXT
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.626706606Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=833.603µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,140] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
12:02:14 policy-pap | [2024-01-23T12:00:10.153+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
12:02:14 simulator | 2024-01-23 11:59:38,056 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.630819415Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.631800745Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=981.39µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,152] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:10.703+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
12:02:14 simulator | 2024-01-23 11:59:38,057 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.635577237Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.636430621Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=853.194µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,152] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
12:02:14 policy-pap | [2024-01-23T12:00:10.812+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
12:02:14 simulator | 2024-01-23 11:59:38,058 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.643551203Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.645098832Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.540578ms
12:02:14 policy-db-migrator | 
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,153] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
12:02:14 policy-pap | [2024-01-23T12:00:11.073+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:02:14 simulator | 2024-01-23 11:59:38,059 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.650440984Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.650899277Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=458.083µs
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | allow.auto.create.topics = true
12:02:14 simulator | 2024-01-23 11:59:38,076 INFO Session workerName=node0
12:02:14 kafka | inter.broker.protocol.version = 3.5-IV2
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.653882459Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,153] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
12:02:14 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
12:02:14 policy-pap | auto.commit.interval.ms = 5000
12:02:14 simulator | 2024-01-23 11:59:38,145 INFO Using GSON for REST calls
12:02:14 kafka | kafka.metrics.polling.interval.secs = 10
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.654704961Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=821.892µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,157] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | auto.include.jmx.reporter = true
12:02:14 simulator | 2024-01-23 11:59:38,153 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}
12:02:14 kafka | kafka.metrics.reporters = []
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.662173901Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,157] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
12:02:14 policy-pap | auto.offset.reset = latest
12:02:14 simulator | 2024-01-23 11:59:38,154 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
12:02:14 kafka | leader.imbalance.check.interval.seconds = 300
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.662516499Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=342.757µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,160] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | bootstrap.servers = [kafka:9092]
12:02:14 simulator | 2024-01-23 11:59:38,154 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @1887ms
12:02:14 kafka | leader.imbalance.per.broker.percentage = 10
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.675269598Z level=info msg="Executing migration" id="create star table"
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,161] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | check.crcs = true
12:02:14 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
12:02:14 simulator | 2024-01-23 11:59:38,154 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4903 ms.
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.676009095Z level=info msg="Migration successfully executed" id="create star table" duration=773.729µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,161] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | client.dns.lookup = use_all_dns_ips
12:02:14 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
12:02:14 simulator | 2024-01-23 11:59:38,156 INFO org.onap.policy.models.simulators started
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.69162302Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,169] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
12:02:14 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
12:02:14 policy-pap | client.id = consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-1
12:02:14 kafka | log.cleaner.backoff.ms = 15000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.692470773Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=849.393µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,169] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | client.rack = 
12:02:14 kafka | log.cleaner.dedupe.buffer.size = 134217728
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.700310792Z level=info msg="Executing migration" id="create org table v1"
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,193] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
12:02:14 policy-pap | connections.max.idle.ms = 540000
12:02:14 kafka | log.cleaner.delete.retention.ms = 86400000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.701020508Z level=info msg="Migration successfully executed" id="create org table v1" duration=709.986µs
12:02:14 zookeeper_1 | [2024-01-23 11:59:41,194] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | default.api.timeout.ms = 60000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.707219524Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | enable.auto.commit = true
12:02:14 kafka | log.cleaner.enable = true
12:02:14 zookeeper_1 | [2024-01-23 11:59:42,149] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | exclude.internal.topics = true
12:02:14 kafka | log.cleaner.io.buffer.load.factor = 0.9
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.707907759Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=688.035µs
12:02:14 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
12:02:14 policy-pap | fetch.max.bytes = 52428800
12:02:14 kafka | log.cleaner.io.buffer.size = 524288
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.713266702Z level=info msg="Executing migration" id="create org_user table v1"
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | fetch.max.wait.ms = 500
12:02:14 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
12:02:14 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:02:14 policy-pap | fetch.min.bytes = 1
12:02:14 kafka | log.cleaner.min.cleanable.ratio = 0.5
12:02:14 kafka | log.cleaner.min.compaction.lag.ms = 0
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | group.id = 7faaa365-1216-4c85-9c2d-e9bca189fc3d
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.714238481Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=998.071µs
12:02:14 kafka | log.cleaner.threads = 1
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | group.instance.id = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.724546316Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
12:02:14 kafka | log.cleanup.policy = [delete]
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | heartbeat.interval.ms = 3000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.726436022Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.888006ms
12:02:14 kafka | log.dir = /tmp/kafka-logs
12:02:14 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
12:02:14 policy-pap | interceptor.classes = []
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.735292273Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
12:02:14 kafka | log.dirs = /var/lib/kafka/data
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | internal.leave.group.on.close = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.736146216Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=854.253µs
12:02:14 kafka | log.flush.interval.messages = 9223372036854775807
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
12:02:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.749888826Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 12:02:14 kafka | log.flush.interval.ms = null 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | isolation.level = read_uncommitted 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.751454145Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.565209ms 12:02:14 kafka | log.flush.offset.checkpoint.interval.ms = 60000 12:02:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.756468401Z level=info msg="Executing migration" id="Update org table charset" 12:02:14 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 12:02:14 policy-db-migrator | 12:02:14 policy-pap | max.partition.fetch.bytes = 1048576 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.756509693Z level=info msg="Migration successfully executed" id="Update org table charset" duration=42.982µs 12:02:14 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 12:02:14 policy-db-migrator | 12:02:14 policy-pap | max.poll.interval.ms = 300000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.765519471Z level=info msg="Executing migration" id="Update org_user table charset" 12:02:14 kafka | log.index.interval.bytes = 4096 12:02:14 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 12:02:14 policy-pap | max.poll.records = 500 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.765570814Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=55.103µs 12:02:14 kafka | log.index.size.max.bytes = 10485760 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | metadata.max.age.ms = 300000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.77590519Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 12:02:14 kafka | log.message.downconversion.enable = true 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | metric.reporters = [] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.776275389Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=382.96µs 12:02:14 kafka | log.message.format.version = 3.0-IV1 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | metrics.num.samples = 2 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.781737157Z level=info msg="Executing migration" id="create dashboard table" 12:02:14 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 12:02:14 policy-pap | metrics.recording.level = INFO 12:02:14 kafka | log.message.timestamp.type = CreateTime 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.782954049Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.214541ms 12:02:14 policy-pap | metrics.sample.window.ms = 30000 12:02:14 kafka | log.preallocate = false 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.825516735Z level=info msg="Executing migration" id="add index dashboard.account_id" 
12:02:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 12:02:14 kafka | log.retention.bytes = -1 12:02:14 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.827357009Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.846424ms 12:02:14 kafka | log.retention.check.interval.ms = 300000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.833574485Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 12:02:14 policy-pap | receive.buffer.bytes = 65536 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | log.retention.hours = 168 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.835184747Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.609482ms 12:02:14 policy-pap | reconnect.backoff.max.ms = 1000 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.839015242Z level=info msg="Executing migration" id="create dashboard_tag table" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | log.retention.minutes = null 12:02:14 policy-pap | reconnect.backoff.ms = 50 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.839486116Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=470.914µs 12:02:14 policy-db-migrator | 12:02:14 kafka | log.retention.ms = null 12:02:14 policy-pap | request.timeout.ms = 30000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.845091282Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 12:02:14 policy-db-migrator | 12:02:14 kafka | log.roll.hours = 168 12:02:14 policy-pap | retry.backoff.ms = 100 12:02:14 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 12:02:14 policy-pap | sasl.client.callback.handler.class = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.84682983Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.737739ms 12:02:14 kafka | log.roll.jitter.hours = 0 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.jaas.config = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.851298737Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 12:02:14 kafka | log.roll.jitter.ms = null 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.852158921Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=860.084µs 12:02:14 kafka | log.roll.ms = null 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.857621149Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 12:02:14 
kafka | log.segment.bytes = 1073741824 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.kerberos.service.name = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.866150173Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.516814ms 12:02:14 kafka | log.segment.delete.delay.ms = 60000 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.879139715Z level=info msg="Executing migration" id="create dashboard v2" 12:02:14 kafka | max.connection.creation.rate = 2147483647 12:02:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.880192798Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.044714ms 12:02:14 kafka | max.connections = 2147483647 12:02:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:02:14 kafka | max.connections.per.ip = 2147483647 12:02:14 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.888401696Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 12:02:14 policy-pap | sasl.login.callback.handler.class = null 12:02:14 kafka | max.connections.per.ip.overrides = 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.890535845Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=2.108947ms 12:02:14 policy-pap | sasl.login.class = null 12:02:14 kafka | max.incremental.fetch.session.cache.slots = 1000 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.897561182Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 12:02:14 policy-pap | sasl.login.connect.timeout.ms = null 12:02:14 kafka | message.max.bytes = 1048588 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.898470019Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=912.086µs 12:02:14 kafka | metadata.log.dir = null 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.login.read.timeout.ms = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.900976136Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 12:02:14 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.901352305Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=376.079µs 12:02:14 kafka | metadata.log.max.snapshot.interval.ms = 3600000 12:02:14 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 12:02:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.907229324Z level=info msg="Executing migration" id="drop table dashboard_v1" 12:02:14 kafka | metadata.log.segment.bytes = 1073741824 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.login.refresh.window.factor = 0.8 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.908447266Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" 
duration=1.217722ms 12:02:14 kafka | metadata.log.segment.min.bytes = 8388608 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.913506404Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 12:02:14 kafka | metadata.log.segment.ms = 604800000 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.913689733Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=183.539µs 12:02:14 kafka | metadata.max.idle.interval.ms = 500 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.login.retry.backoff.ms = 100 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.919757972Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.mechanism = GSSAPI 12:02:14 kafka | metadata.max.retention.bytes = 104857600 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.922814808Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.067167ms 12:02:14 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 12:02:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 12:02:14 kafka | metadata.max.retention.ms = 604800000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.928127598Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.expected.audience = null 12:02:14 kafka | metric.reporters = [] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.930167192Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.038774ms 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | sasl.oauthbearer.expected.issuer = null 12:02:14 kafka | metrics.num.samples = 2 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.93425748Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:02:14 kafka | metrics.recording.level = INFO 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.936106564Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.849164ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:02:14 kafka | metrics.sample.window.ms = 30000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.946296483Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:02:14 kafka | min.insync.replicas = 1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.94801705Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.720147ms 12:02:14 policy-db-migrator | > 
upgrade 0240-jpatoscanodetemplate_metadata.sql 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 12:02:14 kafka | node.id = 1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.953400134Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 12:02:14 kafka | num.io.threads = 8 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.956289141Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.889387ms 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 12:02:14 kafka | num.network.threads = 3 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.962116068Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 12:02:14 kafka | num.partitions = 1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.963039165Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=921.167µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | security.protocol = PLAINTEXT 12:02:14 kafka | num.recovery.threads.per.data.dir = 1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.970371768Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | security.providers = null 12:02:14 kafka | num.replica.alter.log.dirs.threads = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.971825732Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.454994ms 12:02:14 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 12:02:14 policy-pap | send.buffer.bytes = 131072 12:02:14 kafka | num.replica.fetchers = 1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.97767401Z level=info msg="Executing migration" id="Update dashboard table charset" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | session.timeout.ms = 45000 12:02:14 kafka | offset.metadata.max.bytes = 4096 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.977760824Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=87.114µs 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:02:14 kafka | offsets.commit.required.acks = -1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.98100935Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | socket.connection.setup.timeout.ms = 10000 12:02:14 kafka | offsets.commit.timeout.ms = 5000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.981036251Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.071µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.cipher.suites = null 12:02:14 kafka | offsets.load.buffer.size = 5242880 12:02:14 
grafana | logger=migrator t=2024-01-23T11:59:39.984232744Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:02:14 kafka | offsets.retention.check.interval.ms = 600000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.986947902Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.714178ms 12:02:14 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 12:02:14 policy-pap | ssl.endpoint.identification.algorithm = https 12:02:14 kafka | offsets.retention.minutes = 10080 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.99181115Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.engine.factory.class = null 12:02:14 kafka | offsets.topic.compression.codec = 0 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.993861564Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.050195ms 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | ssl.key.password = null 12:02:14 kafka | offsets.topic.num.partitions = 50 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:39.996886578Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.keymanager.algorithm = SunX509 12:02:14 kafka | offsets.topic.replication.factor = 1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.00086244Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.977872ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.keystore.certificate.chain = null 12:02:14 kafka | offsets.topic.segment.bytes = 104857600 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.004332677Z level=info msg="Executing migration" id="Add column uid in dashboard" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.keystore.key = null 12:02:14 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.007378491Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.045274ms 12:02:14 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 12:02:14 policy-pap | ssl.keystore.location = null 12:02:14 kafka | password.encoder.iterations = 4096 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.01052832Z level=info msg="Executing migration" id="Update uid column values in dashboard" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.keystore.password = null 12:02:14 kafka | password.encoder.key.length = 128 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.010736211Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=207.89µs 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | ssl.keystore.type = JKS 12:02:14 kafka | password.encoder.keyfactory.algorithm = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.014460609Z level=info 
msg="Executing migration" id="Add unique index dashboard_org_id_uid" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.protocol = TLSv1.3 12:02:14 kafka | password.encoder.old.secret = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.015337053Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=876.154µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.provider = null 12:02:14 kafka | password.encoder.secret = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.018651491Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.secure.random.implementation = null 12:02:14 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.019791359Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.139377ms 12:02:14 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 12:02:14 policy-pap | ssl.trustmanager.algorithm = PKIX 12:02:14 kafka | process.roles = [] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.023317737Z level=info msg="Executing migration" id="Update dashboard title length" 12:02:14 policy-pap | ssl.truststore.certificates = null 12:02:14 kafka | producer.id.expiration.check.interval.ms = 600000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.023357129Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=40.522µs 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.truststore.location = null 12:02:14 kafka | producer.id.expiration.ms = 86400000 12:02:14 kafka | producer.purgatory.purge.interval.requests = 1000 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | ssl.truststore.password = null 12:02:14 kafka | queued.max.request.bytes = -1 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.02695037Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.truststore.type = JKS 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.028364102Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.416932ms 12:02:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 kafka | queued.max.requests = 500 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.031562183Z level=info msg="Executing migration" id="create dashboard_provisioning" 12:02:14 policy-pap | 12:02:14 kafka | quota.window.num = 11 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.032219927Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=657.534µs 12:02:14 kafka | quota.window.size.seconds = 1 12:02:14 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 12:02:14 policy-pap | [2024-01-23T12:00:11.229+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.03545161Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to 
dashboard_provisioning_tmp_qwerty - v1"
12:02:14 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:11.229+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.042791351Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.338621ms
12:02:14 kafka | remote.log.manager.task.interval.ms = 30000
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:02:14 policy-pap | [2024-01-23T12:00:11.229+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011211228
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.045882557Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
12:02:14 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:11.231+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-1, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Subscribed to topic(s): policy-pdp-pap
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.046543181Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=663.254µs
12:02:14 kafka | remote.log.manager.task.retry.backoff.ms = 500
12:02:14 policy-pap | [2024-01-23T12:00:11.232+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.049623726Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
12:02:14 kafka | remote.log.manager.task.retry.jitter = 0.2
12:02:14 policy-db-migrator |
12:02:14 policy-pap | allow.auto.create.topics = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.050711811Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.087665ms
12:02:14 kafka | remote.log.manager.thread.pool.size = 10
12:02:14 policy-db-migrator |
12:02:14 policy-pap | auto.commit.interval.ms = 5000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.053685222Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
12:02:14 kafka | remote.log.metadata.manager.class.name = null
12:02:14 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
12:02:14 policy-pap | auto.include.jmx.reporter = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.054602328Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=916.696µs
12:02:14 kafka | remote.log.metadata.manager.class.path = null
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | auto.offset.reset = latest
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.057635981Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
12:02:14 kafka | remote.log.metadata.manager.impl.prefix = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
12:02:14 policy-pap | bootstrap.servers = [kafka:9092]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.057951097Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=315.046µs
12:02:14 kafka | remote.log.metadata.manager.listener.name = null
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | check.crcs = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.060955499Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
12:02:14 kafka | remote.log.reader.max.pending.tasks = 100
12:02:14 policy-db-migrator |
12:02:14 policy-pap | client.dns.lookup = use_all_dns_ips
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.061528888Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=573.289µs
12:02:14 kafka | remote.log.reader.threads = 10
12:02:14 policy-db-migrator |
12:02:14 policy-pap | client.id = consumer-policy-pap-2
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.064362101Z level=info msg="Executing migration" id="Add check_sum column"
12:02:14 kafka | remote.log.storage.manager.class.name = null
12:02:14 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
12:02:14 policy-pap | client.rack =
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.066491049Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.128728ms
12:02:14 kafka | remote.log.storage.manager.class.path = null
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | connections.max.idle.ms = 540000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.069402896Z level=info msg="Executing migration" id="Add index for dashboard_title"
12:02:14 kafka | remote.log.storage.manager.impl.prefix = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:02:14 policy-pap | default.api.timeout.ms = 60000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.070593126Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.18969ms
12:02:14 kafka | remote.log.storage.system.enable = false
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | enable.auto.commit = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.074379028Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
12:02:14 kafka | replica.fetch.backoff.ms = 1000
12:02:14 policy-db-migrator |
12:02:14 policy-pap | exclude.internal.topics = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.074568577Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=189.95µs
12:02:14 kafka | replica.fetch.max.bytes = 1048576
12:02:14 policy-db-migrator |
12:02:14 policy-pap | fetch.max.bytes = 52428800
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.077464053Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
12:02:14 kafka | replica.fetch.min.bytes = 1
12:02:14 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
12:02:14 policy-pap | fetch.max.wait.ms = 500
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.077660063Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=172.949µs
12:02:14 kafka | replica.fetch.response.max.bytes = 10485760
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | fetch.min.bytes = 1
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.08174695Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
msg="Executing migration" id="Add index for dashboard_is_folder" 12:02:14 kafka | replica.fetch.wait.max.ms = 500 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:02:14 policy-pap | group.id = policy-pap 12:02:14 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 12:02:14 policy-pap | group.instance.id = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.082617084Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=870.084µs 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | replica.lag.time.max.ms = 30000 12:02:14 policy-pap | heartbeat.interval.ms = 3000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.086040577Z level=info msg="Executing migration" id="Add isPublic for dashboard" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | interceptor.classes = [] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.089821638Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.780101ms 12:02:14 policy-db-migrator | 12:02:14 kafka | replica.selector.class = null 12:02:14 policy-pap | internal.leave.group.on.close = true 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.093435671Z level=info msg="Executing migration" id="create data_source table" 12:02:14 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 12:02:14 kafka | replica.socket.receive.buffer.bytes = 65536 12:02:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.094314515Z level=info msg="Migration successfully executed" id="create data_source table" duration=876.984µs 12:02:14 kafka | replica.socket.timeout.ms = 30000 12:02:14 policy-pap | isolation.level = read_uncommitted 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.098925828Z level=info msg="Executing migration" id="add index data_source.account_id" 12:02:14 kafka | replication.quota.window.num = 11 12:02:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.100020804Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.095495ms 12:02:14 policy-pap | max.partition.fetch.bytes = 1048576 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.103448217Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 12:02:14 kafka | replication.quota.window.size.seconds = 1 12:02:14 policy-pap | max.poll.interval.ms = 300000 12:02:14 policy-db-migrator | 12:02:14 kafka | request.timeout.ms = 30000 12:02:14 policy-pap | max.poll.records = 500 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.104455678Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.007131ms 12:02:14 kafka | reserved.broker.max.id = 1000 12:02:14 policy-pap | metadata.max.age.ms = 300000 12:02:14 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.107699432Z 
level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 12:02:14 policy-pap | metric.reporters = [] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.108584816Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=889.005µs 12:02:14 kafka | sasl.client.callback.handler.class = null 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | metrics.num.samples = 2 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.112874463Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 12:02:14 kafka | sasl.enabled.mechanisms = [GSSAPI] 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 12:02:14 policy-pap | metrics.recording.level = INFO 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.114160908Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.279545ms 12:02:14 kafka | sasl.jaas.config = null 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | metrics.sample.window.ms = 30000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.117860315Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 12:02:14 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 policy-db-migrator | 12:02:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.128910274Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.048388ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | receive.buffer.bytes = 65536 12:02:14 kafka | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.154867216Z level=info msg="Executing migration" id="create data_source table v2" 12:02:14 policy-pap | reconnect.backoff.max.ms = 1000 12:02:14 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 12:02:14 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.156576242Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.706817ms 12:02:14 policy-pap | reconnect.backoff.ms = 50 12:02:14 kafka | sasl.kerberos.service.name = null 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | request.timeout.ms = 30000 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.161439888Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 12:02:14 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 policy-pap | retry.backoff.ms = 100 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.162370455Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=930.037µs 12:02:14 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 12:02:14 policy-pap | sasl.client.callback.handler.class = null 12:02:14 
policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.16583946Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 12:02:14 kafka | sasl.login.callback.handler.class = null 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.166737526Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=897.705µs 12:02:14 policy-pap | sasl.jaas.config = null 12:02:14 kafka | sasl.login.class = null 12:02:14 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.172287676Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 12:02:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 kafka | sasl.login.connect.timeout.ms = null 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.173196292Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=906.716µs 12:02:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 kafka | sasl.login.read.timeout.ms = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.177221715Z level=info msg="Executing migration" id="Add column with_credentials" 12:02:14 policy-pap | sasl.kerberos.service.name = null 12:02:14 kafka | sasl.login.refresh.buffer.seconds = 300 12:02:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.181364415Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.1419ms 12:02:14 kafka | sasl.login.refresh.min.period.seconds = 60 12:02:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.193698368Z level=info msg="Executing migration" id="Add secure json data column" 12:02:14 kafka | sasl.login.refresh.window.factor = 0.8 12:02:14 policy-pap | sasl.login.callback.handler.class = null 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.196377754Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.720578ms 12:02:14 kafka | sasl.login.refresh.window.jitter = 0.05 12:02:14 kafka | sasl.login.retry.backoff.max.ms = 10000 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.2370666Z level=info msg="Executing migration" id="Update data_source table charset" 12:02:14 kafka | sasl.login.retry.backoff.ms = 100 12:02:14 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.237158525Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=95.035µs 12:02:14 policy-pap | sasl.login.class = null 12:02:14 kafka | sasl.mechanism.controller.protocol = GSSAPI 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.249827735Z level=info msg="Executing migration" id="Update initial version to 1" 12:02:14 policy-pap | sasl.login.connect.timeout.ms = null 12:02:14 
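
Note: the ConsumerConfig block being dumped by policy-pap here is the standard echo Kafka clients log at startup, listing every consumer property in effect, default or overridden. To tail the same topic with an ad-hoc consumer for debugging, a sketch (the group name debug-tail is made up; any group id not used by PAP will do, and the CLI naming caveat above applies):

    docker exec kafka kafka-console-consumer --bootstrap-server kafka:9092 \
        --topic policy-pdp-pap --group debug-tail --from-beginning
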
12:02:14 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.250158712Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=334.417µs
12:02:14 policy-pap | sasl.login.read.timeout.ms = null
12:02:14 kafka | sasl.oauthbearer.clock.skew.seconds = 30
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.261306795Z level=info msg="Executing migration" id="Add read_only data column"
12:02:14 policy-pap | sasl.login.refresh.buffer.seconds = 300
12:02:14 kafka | sasl.oauthbearer.expected.audience = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.266014353Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.707908ms
12:02:14 policy-pap | sasl.login.refresh.min.period.seconds = 60
12:02:14 kafka | sasl.oauthbearer.expected.issuer = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.272497081Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
12:02:14 policy-pap | sasl.login.refresh.window.factor = 0.8
12:02:14 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:02:14 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.272736413Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=240.352µs
12:02:14 policy-pap | sasl.login.refresh.window.jitter = 0.05
12:02:14 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.282423862Z level=info msg="Executing migration" id="Update json_data with nulls"
12:02:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000
12:02:14 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.282718586Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=295.975µs
12:02:14 policy-pap | sasl.login.retry.backoff.ms = 100
12:02:14 kafka | sasl.oauthbearer.jwks.endpoint.url = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.291458098Z level=info msg="Executing migration" id="Add uid column"
12:02:14 policy-pap | sasl.mechanism = GSSAPI
12:02:14 kafka | sasl.oauthbearer.scope.claim.name = scope
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.298625Z level=info msg="Migration successfully executed" id="Add uid column" duration=7.161142ms
12:02:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
12:02:14 kafka | sasl.oauthbearer.sub.claim.name = sub
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.322259995Z level=info msg="Executing migration" id="Update uid value"
12:02:14 policy-pap | sasl.oauthbearer.expected.audience = null
12:02:14 kafka | sasl.oauthbearer.token.endpoint.url = null
12:02:14 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.322474536Z level=info msg="Migration successfully executed" id="Update uid value" duration=216.591µs
12:02:14 policy-pap | sasl.oauthbearer.expected.issuer = null
12:02:14 kafka | sasl.server.callback.handler.class = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.326555042Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:02:14 kafka | sasl.server.max.receive.size = 524288
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.327426956Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=871.964µs
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:02:14 kafka | security.inter.broker.protocol = PLAINTEXT
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.331014217Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:02:14 kafka | security.providers = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.332689192Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.690406ms
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
12:02:14 policy-db-migrator |
12:02:14 kafka | server.max.startup.time.ms = 9223372036854775807
12:02:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.342915539Z level=info msg="Executing migration" id="create api_key table"
12:02:14 kafka | socket.connection.setup.timeout.max.ms = 30000
12:02:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub
12:02:14 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.343846166Z level=info msg="Migration successfully executed" id="create api_key table" duration=930.857µs
12:02:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.350535504Z level=info msg="Executing migration" id="add index api_key.account_id"
12:02:14 kafka | socket.connection.setup.timeout.ms = 10000
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | security.protocol = PLAINTEXT
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.35144114Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=900.395µs
12:02:14 kafka | socket.listen.backlog.size = 50
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
12:02:14 policy-pap | security.providers = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.355747697Z level=info msg="Executing migration" id="add index api_key.key"
12:02:14 kafka | socket.receive.buffer.bytes = 102400
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | send.buffer.bytes = 131072
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.356704016Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=956.939µs
12:02:14 kafka | socket.request.max.bytes = 104857600
12:02:14 policy-db-migrator |
12:02:14 policy-pap | session.timeout.ms = 45000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.359904498Z level=info msg="Executing migration" id="add index api_key.account_id_name"
12:02:14 kafka | socket.send.buffer.bytes = 102400
12:02:14 policy-db-migrator |
12:02:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.360885047Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=977.33µs
12:02:14 kafka | ssl.cipher.suites = []
12:02:14 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
12:02:14 policy-pap | socket.connection.setup.timeout.ms = 10000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.367938834Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
12:02:14 kafka | ssl.client.auth = none
12:02:14 policy-pap | ssl.cipher.suites = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.368811058Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=872.394µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
12:02:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.371767917Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:02:14 kafka | ssl.endpoint.identification.algorithm = https
12:02:14 policy-pap | ssl.endpoint.identification.algorithm = https
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.372629121Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=859.794µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | ssl.engine.factory.class = null
12:02:14 policy-pap | ssl.engine.factory.class = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.378309498Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
12:02:14 kafka | ssl.key.password = null
12:02:14 policy-pap | ssl.key.password = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.379183042Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=875.084µs
12:02:14 kafka | ssl.keymanager.algorithm = SunX509
12:02:14 policy-pap | ssl.keymanager.algorithm = SunX509
12:02:14 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.390170847Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
12:02:14 kafka | ssl.keystore.certificate.chain = null
12:02:14 policy-pap | ssl.keystore.certificate.chain = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.400335721Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.166574ms
12:02:14 kafka | ssl.keystore.key = null
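
Note: each "> upgrade NNNN-*.sql" marker emitted by policy-db-migrator above is one idempotent CREATE TABLE IF NOT EXISTS script applied in numeric order. A sketch for verifying the jpatosca* tables after the run; the mariadb host, policyadmin schema, and credentials are assumptions, not shown in this log:

    mysql -h mariadb -u <user> -p<password> policyadmin \
        -e "SHOW TABLES LIKE 'jpatosca%';"
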
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.406138894Z level=info msg="Executing migration" id="create api_key table v2"
12:02:14 policy-pap | ssl.keystore.key = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.406798228Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=661.414µs
12:02:14 policy-pap | ssl.keystore.location = null
12:02:14 kafka | ssl.keystore.location = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.410366248Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
12:02:14 policy-pap | ssl.keystore.password = null
12:02:14 kafka | ssl.keystore.password = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.41178921Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.423782ms
12:02:14 kafka | ssl.keystore.type = JKS
12:02:14 policy-db-migrator |
12:02:14 policy-pap | ssl.keystore.type = JKS
12:02:14 kafka | ssl.principal.mapping.rules = DEFAULT
12:02:14 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
12:02:14 policy-pap | ssl.protocol = TLSv1.3
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.416359551Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
12:02:14 kafka | ssl.protocol = TLSv1.3
12:02:14 policy-pap | ssl.provider = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.417774712Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.404661ms
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.461140054Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
12:02:14 kafka | ssl.provider = null
12:02:14 policy-pap | ssl.secure.random.implementation = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.463261031Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=2.123587ms
12:02:14 kafka | ssl.secure.random.implementation = null
12:02:14 policy-pap | ssl.trustmanager.algorithm = PKIX
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | ssl.truststore.certificates = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.468663945Z level=info msg="Executing migration" id="copy api_key v1 to v2"
12:02:14 kafka | ssl.trustmanager.algorithm = PKIX
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.469435034Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=771.619µs
12:02:14 kafka | ssl.truststore.certificates = null
12:02:14 policy-pap | ssl.truststore.location = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.473055867Z level=info msg="Executing migration" id="Drop old table api_key_v1"
12:02:14 kafka | ssl.truststore.location = null
12:02:14 policy-pap | ssl.truststore.password = null
12:02:14 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
12:02:14 kafka | ssl.truststore.password = null
12:02:14 policy-pap | ssl.truststore.type = JKS
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.474079978Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.024692ms
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | ssl.truststore.type = JKS
12:02:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.480727604Z level=info msg="Executing migration" id="Update api_key table charset"
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
12:02:14 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
12:02:14 policy-pap |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.480808318Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=79.384µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | transaction.max.timeout.ms = 900000
12:02:14 policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.485370169Z level=info msg="Executing migration" id="Add expires to api_key table"
12:02:14 policy-db-migrator |
12:02:14 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
12:02:14 policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.488611723Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.241244ms
12:02:14 policy-db-migrator |
12:02:14 kafka | transaction.state.log.load.buffer.size = 5242880
12:02:14 policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011211238
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.494416936Z level=info msg="Executing migration" id="Add service account foreign key"
12:02:14 policy-db-migrator | > upgrade 0450-pdpgroup.sql
12:02:14 kafka | transaction.state.log.min.isr = 2
12:02:14 policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.500132485Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=5.717109ms
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | transaction.state.log.num.partitions = 50
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.50655467Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
12:02:14 kafka | transaction.state.log.replication.factor = 3
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
12:02:14 policy-pap | [2024-01-23T12:00:11.576+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.506683106Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=128.696µs
12:02:14 kafka | transaction.state.log.segment.bytes = 104857600
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:11.743+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.519390398Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.523440093Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.045065ms
12:02:14 policy-db-migrator |
12:02:14 policy-pap | [2024-01-23T12:00:12.032+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1cdad619, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@319058ce, org.springframework.security.web.context.SecurityContextHolderFilter@1fa796a4, org.springframework.security.web.header.HeaderWriterFilter@3879feec, org.springframework.security.web.authentication.logout.LogoutFilter@259c6ab8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@13018f00, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@8dcacf1, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@73c09a98, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3909308c, org.springframework.security.web.access.ExceptionTranslationFilter@280c3dc0, org.springframework.security.web.access.intercept.AuthorizationFilter@44a9971f]
12:02:14 kafka | transactional.id.expiration.ms = 604800000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.528800494Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
12:02:14 policy-pap | [2024-01-23T12:00:12.932+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
12:02:14 kafka | unclean.leader.election.enable = false
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.53130329Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.502476ms
12:02:14 policy-pap | [2024-01-23T12:00:13.034+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
12:02:14 kafka | unstable.api.versions.enable = false
12:02:14 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.541503926Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
12:02:14 policy-pap | [2024-01-23T12:00:13.059+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
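
Note: with Tomcat up on 6969 under context path /policy/pap/v1, the usual liveness probe is the PAP healthcheck resource; a sketch, assuming basic-auth credentials that are deployment-specific and not shown in this log:

    curl -sk -u '<user>:<pass>' http://localhost:6969/policy/pap/v1/healthcheck
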
12:02:14 kafka | zookeeper.clientCnxnSocket = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.542593011Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.088905ms
12:02:14 policy-pap | [2024-01-23T12:00:13.078+00:00|INFO|ServiceManager|main] Policy PAP starting
12:02:14 kafka | zookeeper.connect = zookeeper:2181
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.64546347Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
12:02:14 policy-pap | [2024-01-23T12:00:13.078+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
12:02:14 kafka | zookeeper.connection.timeout.ms = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.645967486Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=507.056µs
12:02:14 policy-pap | [2024-01-23T12:00:13.078+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
12:02:14 kafka | zookeeper.max.in.flight.requests = 10
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.65219169Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
12:02:14 policy-pap | [2024-01-23T12:00:13.079+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
12:02:14 kafka | zookeeper.metadata.migration.enable = false
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.653014312Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=822.412µs
12:02:14 policy-pap | [2024-01-23T12:00:13.079+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
12:02:14 kafka | zookeeper.session.timeout.ms = 18000
12:02:14 policy-db-migrator | > upgrade 0470-pdp.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.658849877Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
12:02:14 policy-pap | [2024-01-23T12:00:13.080+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
12:02:14 kafka | zookeeper.set.acl = false
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.660708811Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.858424ms
12:02:14 policy-pap | [2024-01-23T12:00:13.080+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
12:02:14 kafka | zookeeper.ssl.cipher.suites = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
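
Note: the defaultGroup seeded from /opt/app/policy/pap/etc/mounted/groups.json should then be visible through the PAP query API; a sketch, assuming /pdps is the group query resource and the same placeholder credentials as above:

    curl -s -u '<user>:<pass>' http://localhost:6969/policy/pap/v1/pdps
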
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.664328484Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
12:02:14 policy-pap | [2024-01-23T12:00:13.086+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7faaa365-1216-4c85-9c2d-e9bca189fc3d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@166d576b
12:02:14 kafka | zookeeper.ssl.client.enable = false
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.665126654Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=798.47µs
12:02:14 policy-pap | [2024-01-23T12:00:13.097+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7faaa365-1216-4c85-9c2d-e9bca189fc3d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
12:02:14 kafka | zookeeper.ssl.crl.enable = false
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.669288515Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
12:02:14 policy-pap | [2024-01-23T12:00:13.098+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:02:14 kafka | zookeeper.ssl.enabled.protocols = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.670133577Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=844.853µs
12:02:14 policy-pap | allow.auto.create.topics = true
12:02:14 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
12:02:14 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.675882578Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
12:02:14 policy-pap | auto.commit.interval.ms = 5000
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | zookeeper.ssl.keystore.location = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.675952411Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=75.473µs
12:02:14 policy-pap | auto.include.jmx.reporter = true
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
12:02:14 kafka | zookeeper.ssl.keystore.password = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.679611916Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
12:02:14 policy-pap | auto.offset.reset = latest
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | zookeeper.ssl.keystore.type = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.679632837Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.001µs
12:02:14 policy-pap | bootstrap.servers = [kafka:9092]
12:02:14 policy-db-migrator |
12:02:14 kafka | zookeeper.ssl.ocsp.enable = false
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.684859312Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
12:02:14 policy-pap | check.crcs = true
12:02:14 policy-db-migrator |
12:02:14 kafka | zookeeper.ssl.protocol = TLSv1.2
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.687748938Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.890236ms
12:02:14 policy-pap | client.dns.lookup = use_all_dns_ips
12:02:14 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
12:02:14 kafka | zookeeper.ssl.truststore.location = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.69373105Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
12:02:14 policy-pap | client.id = consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | zookeeper.ssl.truststore.password = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.696401005Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.674275ms
12:02:14 policy-pap | client.rack =
12:02:14 kafka | zookeeper.ssl.truststore.type = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.70481878Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
12:02:14 policy-pap | connections.max.idle.ms = 540000
12:02:14 kafka | (kafka.server.KafkaConfig)
12:02:14 kafka | [2024-01-23 11:59:43,847] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.704905705Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=85.734µs
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.710044714Z level=info msg="Executing migration" id="create quota table v1"
12:02:14 policy-pap | default.api.timeout.ms = 60000
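
Note: the long kafka | block that closed with (kafka.server.KafkaConfig) above is the broker echoing its effective configuration at startup. The same view is available at runtime from the stock CLI; a sketch (broker id 1 is taken from this log; script is kafka-configs.sh on Apache images, kafka-configs on Confluent images):

    docker exec kafka kafka-configs --bootstrap-server kafka:9092 \
        --entity-type brokers --entity-name 1 --describe --all
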
12:02:14 kafka | [2024-01-23 11:59:43,851] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.711171971Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.127417ms
12:02:14 policy-pap | enable.auto.commit = true
12:02:14 kafka | [2024-01-23 11:59:43,852] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.714459438Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:43,854] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
12:02:14 policy-pap | exclude.internal.topics = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.715616066Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.157649ms
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 11:59:43,887] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.720972977Z level=info msg="Executing migration" id="Update quota table charset"
12:02:14 kafka | [2024-01-23 11:59:43,893] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.721002828Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=31.081µs
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.723964118Z level=info msg="Executing migration" id="create plugin_setting table"
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.724666243Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=701.945µs
12:02:14 kafka | [2024-01-23 11:59:43,904] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.727668085Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
12:02:14 policy-pap | fetch.max.bytes = 52428800
12:02:14 policy-pap | fetch.max.wait.ms = 500
12:02:14 kafka | [2024-01-23 11:59:43,905] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.72915603Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.481435ms
12:02:14 policy-db-migrator |
12:02:14 policy-pap | fetch.min.bytes = 1
12:02:14 kafka | [2024-01-23 11:59:43,906] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.735103151Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
12:02:14 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
12:02:14 policy-pap | group.id = 7faaa365-1216-4c85-9c2d-e9bca189fc3d
12:02:14 kafka | [2024-01-23 11:59:43,916] INFO Starting the log cleaner (kafka.log.LogCleaner)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.737113823Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.010521ms
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | group.instance.id = null
12:02:14 kafka | [2024-01-23 11:59:43,966] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.739958186Z level=info msg="Executing migration" id="Update plugin_setting table charset"
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
12:02:14 policy-pap | heartbeat.interval.ms = 3000
12:02:14 kafka | [2024-01-23 11:59:43,983] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.739978247Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=20.171µs
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | interceptor.classes = []
12:02:14 kafka | [2024-01-23 11:59:44,017] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.743692445Z level=info msg="Executing migration" id="create session table"
12:02:14 policy-db-migrator |
12:02:14 policy-pap | internal.leave.group.on.close = true
12:02:14 kafka | [2024-01-23 11:59:44,080] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.744438103Z level=info msg="Migration successfully executed" id="create session table" duration=745.308µs
12:02:14 policy-db-migrator |
12:02:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
12:02:14 kafka | [2024-01-23 11:59:44,447] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.748856036Z level=info msg="Executing migration" id="Drop old table playlist table"
12:02:14 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
12:02:14 policy-pap | isolation.level = read_uncommitted
12:02:14 kafka | [2024-01-23 11:59:44,473] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.74894074Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=85.034µs
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
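
Note: "Loading logs from log dirs ArraySeq(/var/lib/kafka/data)" with 0 logs loaded is expected on a fresh broker; partition directories only appear under that path once topics are created. A sketch for watching them appear (container name assumed):

    docker exec kafka ls -l /var/lib/kafka/data
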
12:02:14 kafka | [2024-01-23 11:59:44,474] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.751569433Z level=info msg="Executing migration" id="Drop old table playlist_item table"
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
12:02:14 policy-pap | max.partition.fetch.bytes = 1048576
12:02:14 kafka | [2024-01-23 11:59:44,479] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.751649477Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=80.004µs
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | max.poll.interval.ms = 300000
12:02:14 kafka | [2024-01-23 11:59:44,483] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.753850879Z level=info msg="Executing migration" id="create playlist table v2"
12:02:14 policy-db-migrator |
12:02:14 policy-pap | max.poll.records = 500
12:02:14 kafka | [2024-01-23 11:59:44,502] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.754487411Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=636.043µs
12:02:14 policy-pap | metadata.max.age.ms = 300000
12:02:14 kafka | [2024-01-23 11:59:44,504] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 kafka | [2024-01-23 11:59:44,506] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.757347865Z level=info msg="Executing migration" id="create playlist item table v2"
12:02:14 kafka | [2024-01-23 11:59:44,506] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 kafka | [2024-01-23 11:59:44,521] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
12:02:14 policy-pap | metric.reporters = []
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.758005899Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=658.103µs
12:02:14 kafka | [2024-01-23 11:59:44,541] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
12:02:14 kafka | [2024-01-23 11:59:44,579] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1706011184553,1706011184553,1,0,0,72057612285313025,258,0,27
12:02:14 policy-pap | metrics.num.samples = 2
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.764343679Z level=info msg="Executing migration" id="Update playlist table charset"
12:02:14 kafka | (kafka.zk.KafkaZkClient)
12:02:14 kafka | [2024-01-23 11:59:44,581] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
12:02:14 policy-pap | metrics.recording.level = INFO
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.764379951Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=38.422µs
12:02:14 kafka | [2024-01-23 11:59:44,664] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
12:02:14 kafka | [2024-01-23 11:59:44,671] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 policy-pap | metrics.sample.window.ms = 30000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.767360211Z level=info msg="Executing migration" id="Update playlist_item table charset"
12:02:14 kafka | [2024-01-23 11:59:44,682] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 kafka | [2024-01-23 11:59:44,682] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.767408944Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=49.303µs
12:02:14 kafka | [2024-01-23 11:59:44,685] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
12:02:14 kafka | [2024-01-23 11:59:44,699] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
12:02:14 policy-pap | receive.buffer.bytes = 65536
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.770001075Z level=info msg="Executing migration" id="Add playlist column created_at"
12:02:14 kafka | [2024-01-23 11:59:44,703] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
12:02:14 kafka | [2024-01-23 11:59:44,705] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
12:02:14 policy-pap | reconnect.backoff.max.ms = 1000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.773650229Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.648734ms
12:02:14 kafka | [2024-01-23 11:59:44,706] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
12:02:14 kafka | [2024-01-23 11:59:44,710] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
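
Note: broker registration in ZooKeeper is an ephemeral znode, so the addresses logged above can be read back directly; a sketch (zookeeper-shell ships with Kafka, named zookeeper-shell.sh on Apache images):

    docker exec kafka zookeeper-shell zookeeper:2181 get /brokers/ids/1
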
12:02:14 policy-pap | reconnect.backoff.ms = 50
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.779706585Z level=info msg="Executing migration" id="Add playlist column updated_at"
12:02:14 kafka | [2024-01-23 11:59:44,728] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
12:02:14 kafka | [2024-01-23 11:59:44,732] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
12:02:14 policy-pap | request.timeout.ms = 30000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.781836943Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.132668ms
12:02:14 kafka | [2024-01-23 11:59:44,737] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
12:02:14 kafka | [2024-01-23 11:59:44,739] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
12:02:14 policy-pap | retry.backoff.ms = 100
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.787633156Z level=info msg="Executing migration" id="drop preferences table v2"
12:02:14 kafka | [2024-01-23 11:59:44,739] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
12:02:14 kafka | [2024-01-23 11:59:44,744] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.client.callback.handler.class = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.787763263Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=130.996µs
12:02:14 kafka | [2024-01-23 11:59:44,746] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
12:02:14 kafka | [2024-01-23 11:59:44,749] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.jaas.config = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.79126581Z level=info msg="Executing migration" id="drop preferences table v3"
12:02:14 kafka | [2024-01-23 11:59:44,766] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
12:02:14 kafka | [2024-01-23 11:59:44,772] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:02:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.791391396Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=122.647µs
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,774] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.803444715Z level=info msg="Executing migration" id="create preferences table v3"
12:02:14 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
12:02:14 kafka | [2024-01-23 11:59:44,782] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
12:02:14 policy-pap | sasl.kerberos.service.name = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.804211534Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=798.121µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,788] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
12:02:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.811241719Z level=info msg="Executing migration" id="Update preferences table charset"
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
12:02:14 kafka | [2024-01-23 11:59:44,790] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.811274511Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=33.692µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,790] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.login.callback.handler.class = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.814052291Z level=info msg="Executing migration" id="Add column team_id in preferences"
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | sasl.login.class = null
12:02:14 kafka | [2024-01-23 11:59:44,790] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.818101346Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.048545ms
12:02:14 policy-db-migrator | 
12:02:14 policy-pap | sasl.login.connect.timeout.ms = null
12:02:14 kafka | [2024-01-23 11:59:44,791] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.821275696Z level=info msg="Executing migration" id="Update team_id column values in preferences"
12:02:14 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
12:02:14 policy-pap | sasl.login.read.timeout.ms = null
12:02:14 kafka | [2024-01-23 11:59:44,794] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.821478927Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=204.03µs
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | sasl.login.refresh.buffer.seconds = 300
12:02:14 kafka | [2024-01-23 11:59:44,795] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.824484428Z level=info msg="Executing migration" id="Add column week_start in preferences"
12:02:14 policy-pap | sasl.login.refresh.min.period.seconds = 60
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.829175196Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.689727ms
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:02:14 kafka | [2024-01-23 11:59:44,795] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.login.refresh.window.factor = 0.8
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.834937907Z level=info msg="Executing migration" id="Add column preferences.json_data"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,796] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
12:02:14 policy-pap | sasl.login.refresh.window.jitter = 0.05
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.837327448Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.392751ms
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,798] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.840419404Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,802] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
12:02:14 policy-pap | sasl.login.retry.backoff.ms = 100
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.840492638Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=73.733µs
12:02:14 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
12:02:14 kafka | [2024-01-23 11:59:44,804] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
12:02:14 policy-pap | sasl.mechanism = GSSAPI
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.843210735Z level=info msg="Executing migration" id="Add preferences index org_id"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,808] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
12:02:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.84409436Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=883.505µs
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
12:02:14 kafka | [2024-01-23 11:59:44,809] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
12:02:14 policy-pap | sasl.oauthbearer.expected.audience = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.849939195Z level=info msg="Executing migration" id="Add preferences index user_id"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,812] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
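The policy-db-migrator entries interleaved above follow one pattern per numbered script (0520-, 0530-, ...): a banner line, an idempotent CREATE TABLE IF NOT EXISTS, and a closing banner, so rerunning the migrator is harmless. A sketch of how such a script could be applied over JDBC; the URL, credentials, and file name below are placeholders, not values taken from this log:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ApplyMigrationScript {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC coordinates; the stack's real ones are not shown here.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "user", "password");
                 Statement s = c.createStatement()) {
                String script = Files.readString(Path.of("0520-toscacapabilityassignments.sql"));
                for (String stmt : script.split(";")) {
                    // IF NOT EXISTS makes each statement safe to re-run.
                    if (!stmt.isBlank()) s.execute(stmt);
                }
            }
        }
    }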
12:02:14 policy-pap | sasl.oauthbearer.expected.issuer = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.851165697Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.226452ms
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,812] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.854940878Z level=info msg="Executing migration" id="create alert table v1"
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,813] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:02:14 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.856713587Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.772679ms
12:02:14 kafka | [2024-01-23 11:59:44,813] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.86051574Z level=info msg="Executing migration" id="add index alert org_id & id "
12:02:14 kafka | [2024-01-23 11:59:44,815] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.861817285Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.294326ms
12:02:14 kafka | [2024-01-23 11:59:44,815] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
12:02:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.86586631Z level=info msg="Executing migration" id="add index alert state"
12:02:14 kafka | [2024-01-23 11:59:44,817] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
12:02:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.866953245Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.086905ms
12:02:14 kafka | [2024-01-23 11:59:44,823] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
12:02:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.870109394Z level=info msg="Executing migration" id="add index alert dashboard_id"
12:02:14 kafka | [2024-01-23 11:59:44,826] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
12:02:14 policy-pap | security.protocol = PLAINTEXT
12:02:14 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.871100795Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=990.88µs
12:02:14 kafka | [2024-01-23 11:59:44,828] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
12:02:14 policy-pap | security.providers = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.875967531Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
12:02:14 kafka | [2024-01-23 11:59:44,828] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
12:02:14 policy-pap | send.buffer.bytes = 131072
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.876641545Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=674.645µs
12:02:14 kafka | [2024-01-23 11:59:44,828] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
12:02:14 policy-pap | session.timeout.ms = 45000
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.880170873Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
12:02:14 kafka | [2024-01-23 11:59:44,829] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.9:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.881466999Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.299625ms
12:02:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,829] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.886693303Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
12:02:14 policy-pap | socket.connection.setup.timeout.ms = 10000
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,829] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.887793708Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.100685ms
12:02:14 policy-db-migrator | > upgrade 0570-toscadatatype.sql
12:02:14 kafka | [2024-01-23 11:59:44,830] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.892796271Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,831] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
12:02:14 policy-pap | ssl.cipher.suites = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.906057471Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.26012ms
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
12:02:14 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
12:02:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.908911436Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | 	at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
12:02:14 policy-pap | ssl.endpoint.identification.algorithm = https
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.909369149Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=457.653µs
12:02:14 policy-db-migrator | 
12:02:14 kafka | 	at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
12:02:14 policy-pap | ssl.engine.factory.class = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.912117808Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
12:02:14 policy-db-migrator | 
12:02:14 kafka | 	at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
12:02:14 policy-pap | ssl.key.password = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.913217423Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.098715ms
12:02:14 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
12:02:14 kafka | 	at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:127)
12:02:14 policy-pap | ssl.keymanager.algorithm = SunX509
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.918817006Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,833] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
12:02:14 policy-pap | ssl.keystore.certificate.chain = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.919157193Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=339.937µs
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
12:02:14 kafka | [2024-01-23 11:59:44,840] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
12:02:14 policy-pap | ssl.keystore.key = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.921695802Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 11:59:44,841] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
12:02:14 policy-pap | ssl.keystore.location = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.922142324Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=446.512µs
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,841] INFO Kafka startTimeMs: 1706011184832 (org.apache.kafka.common.utils.AppInfoParser)
12:02:14 policy-pap | ssl.keystore.password = null
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 11:59:44,843] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.924917275Z level=info msg="Executing migration" id="create alert_notification table v1"
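The WARN/IOException block above is a benign startup race, not a failure: the controller's RequestSendThread dials the broker's own data-plane listener (kafka:9092) a few milliseconds before "Enabling request processing." completes, then retries; the later "Controller 1 connected to kafka:9092" line shows the retry succeeding. The same wait-until-ready idea as an illustrative loop, not Kafka's internal NetworkClientUtils.awaitReady:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class AwaitListener {
        // Illustrative only: poll until the listener accepts TCP connections.
        static void awaitReady(String host, int port, long timeoutMs) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 1_000);
                    return; // listener is up
                } catch (Exception e) {
                    Thread.sleep(100); // comparable in spirit to reconnect.backoff.ms = 50
                }
            }
            throw new IllegalStateException("listener not reachable: " + host + ":" + port);
        }
    }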
migration" id="create alert_notification table v1" 12:02:14 policy-pap | ssl.keystore.type = JKS 12:02:14 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 12:02:14 kafka | [2024-01-23 11:59:44,856] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.926299814Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.38172ms 12:02:14 policy-pap | ssl.protocol = TLSv1.3 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 11:59:44,937] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.932648445Z level=info msg="Executing migration" id="Add column is_default" 12:02:14 policy-pap | ssl.provider = null 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:02:14 kafka | [2024-01-23 11:59:45,027] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.938148633Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.495628ms 12:02:14 policy-pap | ssl.secure.random.implementation = null 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 11:59:45,112] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.944501384Z level=info msg="Executing migration" id="Add column frequency" 12:02:14 policy-pap | ssl.trustmanager.algorithm = PKIX 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 11:59:45,112] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.949788742Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.286087ms 12:02:14 policy-pap | ssl.truststore.certificates = null 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 11:59:49,857] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.954320261Z level=info msg="Executing migration" id="Add column send_reminder" 12:02:14 policy-pap | ssl.truststore.location = null 12:02:14 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 12:02:14 kafka | [2024-01-23 11:59:49,858] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.958591797Z level=info msg="Migration successfully 
executed" id="Add column send_reminder" duration=4.271135ms 12:02:14 policy-pap | ssl.truststore.password = null 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,628] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.962053902Z level=info msg="Executing migration" id="Add column disable_resolve_message" 12:02:14 policy-pap | ssl.truststore.type = JKS 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 12:02:14 kafka | [2024-01-23 12:00:13,639] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.966188551Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.129689ms 12:02:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,642] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.971457037Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 12:02:14 policy-pap | 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,642] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.97271706Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.313876ms 12:02:14 policy-pap | 
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,720] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(UZhXoIGVRReKBLH6iRv9pA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(y4LhsVCjShWp08qTM9318g),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.975974095Z level=info msg="Executing migration" id="Update alert table charset"
12:02:14 policy-pap | [2024-01-23T12:00:13.104+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
12:02:14 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
12:02:14 kafka | [2024-01-23 12:00:13,722] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.976089491Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=115.316µs
12:02:14 policy-pap | [2024-01-23T12:00:13.104+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213104
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.980130565Z level=info msg="Executing migration" id="Update alert_notification table charset"
12:02:14 policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Subscribed to topic(s): policy-pdp-pap
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
12:02:14 kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.980185938Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=60.253µs
12:02:14 policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.984398941Z level=info msg="Executing migration" id="create notification_journal table v1"
12:02:14 policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a4c34505-3ec0-419b-8744-c011170ffba7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@712c9bcf
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.985387551Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=988.75µs
12:02:14 policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a4c34505-3ec0-419b-8744-c011170ffba7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.989643876Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
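The long run of "Changed partition ... state from NonExistentPartition to NewPartition" lines is the controller's ZkPartitionStateMachine walking all 50 __consumer_offsets partitions plus policy-pdp-pap-0 through their lifecycle; later in startup they continue to OnlinePartition once a leader is elected. An illustrative (not Kafka-internal) encoding of the states the log names:

    import java.util.Map;
    import java.util.Set;

    public class PartitionLifecycle {
        enum State { NON_EXISTENT, NEW, ONLINE, OFFLINE }

        // Transitions implied by the controller log lines; illustrative only.
        static final Map<State, Set<State>> VALID = Map.of(
                State.NON_EXISTENT, Set.of(State.NEW),
                State.NEW, Set.of(State.ONLINE),
                State.ONLINE, Set.of(State.OFFLINE),
                State.OFFLINE, Set.of(State.ONLINE, State.NON_EXISTENT));

        static State advance(State from, State to) {
            if (!VALID.getOrDefault(from, Set.of()).contains(to))
                throw new IllegalStateException(from + " -> " + to);
            return to;
        }

        public static void main(String[] args) {
            // Mirrors "NonExistentPartition to NewPartition" for __consumer_offsets-22.
            State s = advance(State.NON_EXISTENT, State.NEW);
            System.out.println("__consumer_offsets-22 is now " + s);
        }
    }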
12:02:14 policy-pap | [2024-01-23T12:00:13.106+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:02:14 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
12:02:14 kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.990693449Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.049183ms
12:02:14 policy-pap | allow.auto.create.topics = true
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.996175996Z level=info msg="Executing migration" id="drop alert_notification_journal"
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | auto.commit.interval.ms = 5000
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:40.997895323Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.681055ms
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | auto.include.jmx.reporter = true
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.001891925Z level=info msg="Executing migration" id="create alert_notification_state table v1"
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | auto.offset.reset = latest
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.002651363Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=759.348µs
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | bootstrap.servers = [kafka:9092]
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.00794646Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | check.crcs = true
12:02:14 policy-db-migrator | > upgrade 0630-toscanodetype.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.009808553Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.864714ms
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | client.dns.lookup = use_all_dns_ips
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.014388662Z level=info msg="Executing migration" id="Add for to alert table"
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | client.id = consumer-policy-pap-4
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.020696488Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=6.306776ms
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | client.rack = 
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.068841378Z level=info msg="Executing migration" id="Add column uid in alert_notification"
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | connections.max.idle.ms = 540000
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | default.api.timeout.ms = 60000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.075009777Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.169869ms
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | enable.auto.commit = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.079870661Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
12:02:14 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | exclude.internal.topics = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.080017068Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=147.127µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | fetch.max.bytes = 52428800
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.08265894Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | fetch.max.wait.ms = 500
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.083307853Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=648.753µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | fetch.min.bytes = 1
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.089525454Z level=info msg="Executing migration" id="Remove unique index org_id_name"
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | group.id = policy-pap
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.090118564Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=591.67µs
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | group.instance.id = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.095744195Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
12:02:14 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | heartbeat.interval.ms = 3000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.102251041Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.502196ms
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | interceptor.classes = []
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.105300024Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | internal.leave.group.on.close = true
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.105346466Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=47.062µs
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.111741156Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | isolation.level = read_uncommitted
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.113067883Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.326607ms
12:02:14 policy-db-migrator | 
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.11640513Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
12:02:14 policy-db-migrator | > upgrade 0660-toscaparameter.sql
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | max.partition.fetch.bytes = 1048576
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.118039802Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.634171ms
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | max.poll.interval.ms = 300000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.122313506Z level=info msg="Executing migration" id="Drop old annotation table v4"
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | max.poll.records = 500
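The ConsumerConfig dump running through this stretch is policy-pap's fourth consumer (client.id = consumer-policy-pap-4) in group policy-pap, reading policy-pdp-pap with StringDeserializer on both key and value. Reduced to code, the logged values amount to roughly the following sketch; only properties actually shown in the dump are set, everything else stays at its default:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PapConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer-policy-pap-4");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Matches "Subscribed to topic(s): policy-pdp-pap" earlier in the log.
                consumer.subscribe(List.of("policy-pdp-pap"));
            }
        }
    }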
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.122480194Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=167.179µs
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | metadata.max.age.ms = 300000
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.128527557Z level=info msg="Executing migration" id="create annotation table v5"
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | metric.reporters = []
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.129508876Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=981.039µs
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | metrics.num.samples = 2
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.135662944Z level=info msg="Executing migration" id="add index annotation 0 v3"
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | metrics.recording.level = INFO
12:02:14 policy-db-migrator | > upgrade 0670-toscapolicies.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.136853814Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.197169ms
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | metrics.sample.window.ms = 30000
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.143344359Z level=info msg="Executing migration" id="add index annotation 1 v3"
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.144421032Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.082254ms
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | receive.buffer.bytes = 65536
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.147848144Z level=info msg="Executing migration" id="add index annotation 2 v3"
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | reconnect.backoff.max.ms = 1000
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.148855354Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.026271ms
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | reconnect.backoff.ms = 50
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.152195822Z level=info msg="Executing migration" id="add index annotation 3 v3"
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | request.timeout.ms = 30000
12:02:14 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.154319678Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=2.123336ms
12:02:14 kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | retry.backoff.ms = 100
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.160635134Z level=info msg="Executing migration" id="add index annotation 4 v3"
12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | sasl.client.callback.handler.class = null
12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.161994682Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.360018ms
12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
12:02:14 policy-pap | sasl.jaas.config = null
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.165034654Z level=info msg="Executing migration" id="Update annotation table charset"
level=info msg="Executing migration" id="Update annotation table charset" 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:02:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.165062656Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=29.482µs 12:02:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.168086587Z level=info msg="Executing migration" id="Add column region_id to annotation table" 12:02:14 policy-pap | sasl.kerberos.service.name = null 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 0690-toscapolicy.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.173064856Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.977739ms 12:02:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.176277017Z level=info msg="Executing migration" id="Drop category_id index" 12:02:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.176960752Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=684.164µs 12:02:14 policy-pap | sasl.login.callback.handler.class = null 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.182108829Z level=info msg="Executing migration" id="Add column tags to annotation table" 12:02:14 policy-pap | sasl.login.class = null 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator 
t=2024-01-23T11:59:41.185438236Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.333477ms 12:02:14 policy-pap | sasl.login.connect.timeout.ms = null 12:02:14 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.189482358Z level=info msg="Executing migration" id="Create annotation_tag table v2" 12:02:14 policy-pap | sasl.login.read.timeout.ms = null 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.190190704Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=707.486µs 12:02:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.193084499Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 12:02:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.194065048Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=980.089µs 12:02:14 policy-pap | sasl.login.refresh.window.factor = 0.8 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.199638157Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.200360723Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=721.926µs 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.login.retry.backoff.ms = 100 12:02:14 grafana | logger=migrator 
t=2024-01-23T11:59:41.203653308Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.mechanism = GSSAPI 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.224416887Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=20.762919ms 12:02:14 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.229013588Z level=info msg="Executing migration" id="Create annotation_tag table v3" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.oauthbearer.expected.audience = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.230174026Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.152478ms 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 12:02:14 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.oauthbearer.expected.issuer = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.236710823Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.238437449Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.726606ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.242227199Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:02:14 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from 
NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.243223579Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=995.46µs 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.246050291Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 12:02:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.246735595Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=684.145µs 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:02:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.251982288Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.252198528Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=212.661µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | security.protocol = PLAINTEXT 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.2558248Z level=info msg="Executing migration" id="Add created time to annotation table" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | security.providers = null 12:02:14 policy-pap | send.buffer.bytes = 131072 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.263444921Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=7.619831ms 12:02:14 policy-db-migrator | > upgrade 0730-toscaproperty.sql 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.266532936Z level=info msg="Executing migration" id="Add updated time to annotation table" 12:02:14 policy-pap | session.timeout.ms = 45000 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.270765938Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.232692ms 12:02:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | socket.connection.setup.timeout.ms = 10000 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.273867473Z level=info msg="Executing migration" id="Add index for created in annotation table" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.cipher.suites = null 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.274826351Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=962.098µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.279619281Z level=info msg="Executing migration" id="Add index for updated in annotation table" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.endpoint.identification.algorithm = https 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.280658223Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.036772ms 12:02:14 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.284076474Z level=info msg="Executing migration" 
id="Convert existing annotations from seconds to milliseconds" 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.engine.factory.class = null 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.284463194Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=386.87µs 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.key.password = null 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.287894745Z level=info msg="Executing migration" id="Add epoch_end column" 12:02:14 policy-pap | ssl.keymanager.algorithm = SunX509 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.292533998Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.640353ms 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.keystore.certificate.chain = null 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.29797051Z level=info msg="Executing migration" id="Add index for epoch_end" 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.keystore.key = null 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.299016302Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.045632ms 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.keystore.location = null 12:02:14 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.304100107Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.keystore.password = null 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.3043663Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=265.323µs 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.keystore.type = JKS 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.313887487Z level=info msg="Executing migration" id="Move region to single row" 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.protocol = TLSv1.3 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.314795352Z level=info msg="Migration successfully executed" id="Move region to single row" duration=908.135µs 12:02:14 policy-pap | ssl.provider = null 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.320496118Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 12:02:14 policy-pap | ssl.secure.random.implementation = null 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.321717489Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.221431ms 12:02:14 policy-pap | ssl.trustmanager.algorithm = PKIX 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.324837625Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.truststore.certificates = null 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.326209144Z level=info msg="Migration successfully executed" id="Remove index 
org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.371049ms 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.truststore.location = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.329845446Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.truststore.password = null 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | ssl.truststore.type = JKS 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.330851356Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.00577ms 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.336150391Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 12:02:14 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:02:14 policy-db-migrator | > upgrade 0770-toscarequirement.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.338024085Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.874794ms 12:02:14 policy-pap | 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.342243906Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 12:02:14 kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.110+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.343762183Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.525066ms 12:02:14 kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.110+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.354351683Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 12:02:14 policy-pap | [2024-01-23T12:00:13.110+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213110 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.356363313Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=2.005991ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 12:02:14 kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.359978984Z level=info msg="Executing migration" id="Increase tags column to length 4096" 12:02:14 policy-db-migrator | > upgrade 0780-toscarequirements.sql 12:02:14 policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|ServiceManager|main] Policy PAP starting topics 12:02:14 kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.36009971Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=120.166µs 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a4c34505-3ec0-419b-8744-c011170ffba7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 12:02:14 kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.36348937Z level=info msg="Executing migration" id="create test_data table" 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 12:02:14 policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource 
[consumerGroup=7faaa365-1216-4c85-9c2d-e9bca189fc3d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 12:02:14 kafka | [2024-01-23 12:00:13,737] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.364473129Z level=info msg="Migration successfully executed" id="create test_data table" duration=983.609µs 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e3f1829a-7c06-43ff-a52c-f9eb795609b7, alive=false, publisher=null]]: starting 12:02:14 kafka | [2024-01-23 12:00:13,892] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.369529583Z level=info msg="Executing migration" id="create dashboard_version table v1" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:13.129+00:00|INFO|ProducerConfig|main] ProducerConfig values: 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.371235238Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.705326ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | acks = -1 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.378144014Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 12:02:14 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 12:02:14 policy-pap | auto.include.jmx.reporter = true 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.37907089Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=920.516µs 12:02:14 policy-db-migrator | -------------- 12:02:14 
policy-pap | batch.size = 16384 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.382446989Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:02:14 policy-pap | bootstrap.servers = [kafka:9092] 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.383476861Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.029492ms 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | buffer.memory = 33554432 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.390175786Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | client.dns.lookup = use_all_dns_ips 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.390488432Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=312.916µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | client.id = producer-1 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.397535375Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 12:02:14 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 12:02:14 policy-pap | compression.type = none 12:02:14 kafka | [2024-01-23 
12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.398210819Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=675.184µs 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | connections.max.idle.ms = 540000 12:02:14 kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.402808889Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 12:02:14 policy-pap | delivery.timeout.ms = 120000 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.403170167Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=361.248µs 12:02:14 policy-pap | enable.idempotence = true 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.407169587Z level=info msg="Executing migration" id="create team table" 12:02:14 policy-pap | interceptor.classes = [] 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.407931845Z level=info msg="Migration successfully executed" id="create team table" duration=761.868µs 12:02:14 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.413727755Z level=info msg="Executing migration" id="add index team.org_id" 12:02:14 policy-pap | linger.ms = 0 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.414783528Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.054793ms 12:02:14 policy-pap | max.block.ms = 60000 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.419323866Z level=info msg="Executing migration" id="add unique index team_org_id_name" 12:02:14 policy-pap | max.in.flight.requests.per.connection = 5 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.421520356Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=2.19534ms 12:02:14 policy-pap | max.request.size = 1048576 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.429281704Z level=info 
msg="Executing migration" id="Add column uid in team" 12:02:14 policy-pap | metadata.max.age.ms = 300000 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.434701175Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.420391ms 12:02:14 policy-pap | metadata.max.idle.ms = 300000 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.501582114Z level=info msg="Executing migration" id="Update uid column values in team" 12:02:14 policy-pap | metric.reporters = [] 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.502039937Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=473.464µs 12:02:14 policy-db-migrator | > upgrade 0820-toscatrigger.sql 12:02:14 policy-pap | metrics.num.samples = 2 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.514250178Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:02:14 policy-pap | metrics.recording.level = INFO 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.515379105Z level=info msg="Migration successfully executed" id="Add unique index 
team_org_id_uid" duration=1.137637ms 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | metrics.sample.window.ms = 30000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.633699898Z level=info msg="Executing migration" id="create team member table" 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | partitioner.adaptive.partitioning.enable = true 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.634509999Z level=info msg="Migration successfully executed" id="create team member table" duration=813.761µs 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | partitioner.availability.timeout.ms = 0 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.719594279Z level=info msg="Executing migration" id="add index team_member.org_id" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | partitioner.class = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.720338356Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=739.407µs 12:02:14 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | partitioner.ignore.keys = false 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.818411626Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | receive.buffer.bytes = 32768 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.820023137Z level=info msg="Migration successfully executed" id="add unique index 
team_member_org_id_team_id_user_id" duration=1.613551ms 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | reconnect.backoff.max.ms = 1000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.948042186Z level=info msg="Executing migration" id="add index team_member.team_id" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | reconnect.backoff.ms = 50 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:41.949669127Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.629261ms 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | request.timeout.ms = 30000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.014745849Z level=info msg="Executing migration" id="Add column email to team table" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | retries = 2147483647 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.021126865Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=6.382666ms 12:02:14 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | retry.backoff.ms = 100 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.069996122Z level=info msg="Executing migration" id="Add column external to team_member table" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | sasl.client.callback.handler.class = null 12:02:14 grafana | 
logger=migrator t=2024-01-23T11:59:42.076307074Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.313222ms 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | sasl.jaas.config = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.081957104Z level=info msg="Executing migration" id="Add column permission to team_member table" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.08653612Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.589977ms 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.097061701Z level=info msg="Executing migration" id="create dashboard acl table" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.kerberos.service.name = null 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.098579716Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.519065ms 12:02:14 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 12:02:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.103197495Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.104706019Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.520405ms 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 12:02:14 policy-pap | sasl.login.callback.handler.class = null 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.login.class = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.10997837Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.login.connect.timeout.ms = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.110968639Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=990.109µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.login.read.timeout.ms = null 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.114172008Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 12:02:14 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 12:02:14 policy-pap | sasl.login.refresh.buffer.seconds = 300 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.login.refresh.min.period.seconds = 60 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.115084143Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=911.846µs 12:02:14 policy-db-migrator | CREATE INDEX 
FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 12:02:14 policy-pap | sasl.login.refresh.window.factor = 0.8 12:02:14 kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.119439428Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.login.refresh.window.jitter = 0.05 12:02:14 kafka | [2024-01-23 12:00:13,898] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.12029317Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=859.582µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000 12:02:14 kafka | [2024-01-23 12:00:13,898] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.127697387Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.login.retry.backoff.ms = 100 12:02:14 kafka | [2024-01-23 12:00:13,898] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.12918069Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.483344ms 12:02:14 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 12:02:14 policy-pap | sasl.mechanism = GSSAPI 12:02:14 kafka | [2024-01-23 12:00:13,900] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.136035419Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.137183226Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.147527ms 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 12:02:14 policy-pap | sasl.oauthbearer.expected.audience = null 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.142677818Z level=info msg="Executing migration" id="add index dashboard_permission" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.expected.issuer = null 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.144215614Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.537536ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.150436081Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.151092654Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=652.592µs 12:02:14 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.156222608Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.156543124Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=318.575µs 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 12:02:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.160173923Z level=info msg="Executing migration" id="create tag table" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.160906919Z level=info msg="Migration successfully executed" id="create tag table" duration=732.816µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.165756759Z level=info msg="Executing migration" id="add index tag.key_value" 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | security.protocol = PLAINTEXT 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.167247653Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.490864ms 12:02:14 kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 12:02:14 policy-pap | security.providers = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.172198558Z level=info msg="Executing migration" id="create login attempt table" 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | send.buffer.bytes = 131072 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.173725134Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.532455ms 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 12:02:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.181069187Z level=info msg="Executing migration" id="add index login_attempt.username" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | socket.connection.setup.timeout.ms = 10000 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.182273486Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.204309ms 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.18740795Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 12:02:14 policy-pap | ssl.cipher.suites = null 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.1892256Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.81731ms 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 12:02:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:02:14 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.192803207Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.212606097Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.80441ms 12:02:14 policy-pap | ssl.endpoint.identification.algorithm = https 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.220790652Z level=info msg="Executing migration" id="create login_attempt v2" 12:02:14 policy-pap | ssl.engine.factory.class = null 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.221681716Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=891.104µs 12:02:14 policy-pap | ssl.key.password = null 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 12:02:14 policy-pap | ssl.keymanager.algorithm = SunX509 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.22722507Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 12:02:14 policy-pap | ssl.keystore.certificate.chain = null 12:02:14 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.228994298Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.768768ms 12:02:14 policy-pap | ssl.keystore.key = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.232585405Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.keystore.location = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.233397706Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=839.511µs 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 12:02:14 policy-pap | ssl.keystore.password = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.23873517Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | ssl.keystore.type = JKS 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.239445925Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=710.556µs 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.protocol = TLSv1.3 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.242908246Z level=info msg="Executing migration" id="create user auth table" 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | ssl.provider = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.244230931Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.322075ms 12:02:14 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 12:02:14 policy-pap | ssl.secure.random.implementation = null 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.247910633Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 12:02:14 policy-pap | ssl.trustmanager.algorithm = PKIX 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.249790536Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.879343ms 12:02:14 kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 12:02:14 policy-pap | ssl.truststore.certificates = null 12:02:14 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.258275866Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 12:02:14 policy-pap | ssl.truststore.location = null 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.258382682Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=103.745µs 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 12:02:14 policy-pap | ssl.truststore.password = null 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.261879055Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 12:02:14 policy-pap | ssl.truststore.type = JKS 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.27048467Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.615316ms 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 12:02:14 policy-pap | transaction.timeout.ms = 60000 12:02:14 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.273599674Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 12:02:14 policy-pap | transactional.id = null 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.278722238Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.122314ms 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 12:02:14 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 12:02:14 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.282020101Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 12:02:14 policy-pap | 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.287383996Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.363455ms 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.141+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
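The ProducerConfig dump that ends above (acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer for keys and values, bootstrap.servers = [kafka:9092]) is what the "Instantiated an idempotent producer" line refers to: with acks set to all, idempotence enabled, and effectively unbounded retries, the producer gets duplicate-free, in-order delivery per partition. A minimal Java sketch with the same settings follows; the property values are copied from the log, while the class name, topic, and send call are illustrative assumptions, not the actual policy-pap wiring.

    // Sketch only: mirrors the producer settings visible in the ProducerConfig dump above.
    // Assumes a reachable broker at kafka:9092; NOT the actual policy-pap code.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class IdempotentProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // bootstrap.servers = [kafka:9092]
            props.put(ProducerConfig.ACKS_CONFIG, "all");                     // acks = -1 in the dump
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");      // enable.idempotence = true
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);      // retries = 2147483647
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // policy-pdp-pap is the topic the controller brings online earlier in this log;
                // the key and value here are placeholders for illustration.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "value"));
            }
        }
    }
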
12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.29272306Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.29816501Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.441009ms 12:02:14 kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 12:02:14 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.30323522Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213161 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.304253531Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.020951ms 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e3f1829a-7c06-43ff-a52c-f9eb795609b7, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 12:02:14 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.313486788Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 
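The controller TRACE entries surrounding this point record become-leader transitions for every __consumer_offsets partition and for policy-pdp-pap-0, each with leader=1, isr=[1], replicas=[1], which is the expected layout on this single-broker test cluster (the summary line further below reports 51 become-leader and 0 become-follower partitions). A short Java AdminClient sketch that would yield a topic with exactly that layout is given below; the broker address is taken from the bootstrap.servers value in the log, and the explicit createTopics call is an assumption for illustration, since this excerpt does not show how policy-pdp-pap was actually created (it may have been auto-created).

    // Sketch only: shows the kind of topic that produces the single-replica
    // become-leader entries above (leader=1, isr=[1], replicas=[1]).
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicLayoutSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed, from the log
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1: on a one-broker cluster the
                // controller immediately elects broker 1 as leader, giving isr=[1].
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }
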
12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de7a1a5b-b823-4e66-b4fb-feb25d317168, alive=false, publisher=null]]: starting 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.321126565Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.637958ms 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:13.162+00:00|INFO|ProducerConfig|main] ProducerConfig values: 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.325547454Z level=info msg="Executing migration" id="create server_lock table" 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 12:02:14 policy-pap | acks = -1 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.326301511Z level=info msg="Migration successfully executed" id="create server_lock table" duration=753.927µs 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 12:02:14 policy-pap | auto.include.jmx.reporter = true 12:02:14 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.329800785Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 12:02:14 policy-pap | batch.size = 16384 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.330790964Z 
level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=984.569µs 12:02:14 policy-pap | bootstrap.servers = [kafka:9092] 12:02:14 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.335927328Z level=info msg="Executing migration" id="create user auth token table" 12:02:14 policy-pap | buffer.memory = 33554432 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.337177329Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.250302ms 12:02:14 policy-pap | client.dns.lookup = use_all_dns_ips 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,907] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.344512022Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 12:02:14 policy-pap | client.id = producer-2 12:02:14 kafka | [2024-01-23 12:00:13,911] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.346780275Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.262072ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | compression.type = none 12:02:14 kafka | [2024-01-23 12:00:13,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.418344055Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 12:02:14 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 12:02:14 policy-pap | connections.max.idle.ms = 540000 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.420400206Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.056821ms 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | delivery.timeout.ms = 120000 12:02:14 
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.424941061Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 12:02:14 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:02:14 policy-pap | enable.idempotence = true 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | interceptor.classes = [] 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.426151961Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.21161ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.430002811Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | linger.ms = 0 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.435936215Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.933544ms 12:02:14 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 12:02:14 policy-pap | max.block.ms = 60000 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.440507961Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | max.in.flight.requests.per.connection = 5 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.441760693Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.252442ms 12:02:14 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:02:14 policy-pap | max.request.size = 1048576 12:02:14 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 
from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.445377422Z level=info msg="Executing migration" id="create cache_data table" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | metadata.max.age.ms = 300000 12:02:14 kafka | [2024-01-23 12:00:13,930] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.446416703Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.038101ms 12:02:14 policy-db-migrator | 12:02:14 policy-pap | metadata.max.idle.ms = 300000 12:02:14 kafka | [2024-01-23 12:00:13,930] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.45181451Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | metric.reporters = [] 12:02:14 kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.454175987Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=2.362037ms 12:02:14 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 12:02:14 policy-pap | metrics.num.samples = 2 12:02:14 kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.511358336Z level=info msg="Executing migration" id="create short_url table v1" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | metrics.recording.level = INFO 12:02:14 kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.512530244Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.176878ms 12:02:14 policy-pap | metrics.sample.window.ms = 30000 12:02:14 kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.517417296Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 12:02:14 policy-pap | partitioner.adaptive.partitioning.enable = true 12:02:14 kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.518546392Z level=info msg="Migration successfully executed" 
id="add index short_url.org_id-uid" duration=1.129106ms 12:02:14 policy-pap | partitioner.availability.timeout.ms = 0 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.522881716Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 12:02:14 policy-pap | partitioner.class = null 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.523234914Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=352.807µs 12:02:14 policy-pap | partitioner.ignore.keys = false 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.528504274Z level=info msg="Executing migration" id="delete alert_definition table" 12:02:14 policy-pap | receive.buffer.bytes = 32768 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.528834121Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=329.286µs 12:02:14 policy-pap | reconnect.backoff.max.ms = 1000 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.532881201Z level=info msg="Executing migration" id="recreate alert_definition table" 12:02:14 policy-pap | reconnect.backoff.ms = 50 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | request.timeout.ms = 30000 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.534348813Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.466952ms 12:02:14 policy-pap | retries = 2147483647 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 
1010-FK_ToscaServiceTemplate_nodeTypesName.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.538279478Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 12:02:14 policy-pap | retry.backoff.ms = 100 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.540169531Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.889293ms 12:02:14 policy-pap | sasl.client.callback.handler.class = null 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.545719356Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 12:02:14 policy-pap | sasl.jaas.config = null 12:02:14 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:02:14 kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.547688013Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.968447ms 12:02:14 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.552461609Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 12:02:14 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.552741063Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=283.194µs 12:02:14 policy-pap | sasl.kerberos.service.name = null 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.556815865Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 12:02:14 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:02:14 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.558598423Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.782538ms
12:02:14 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.562261484Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
12:02:14 policy-pap | sasl.login.callback.handler.class = null
12:02:14 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.563325967Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.064983ms
12:02:14 policy-pap | sasl.login.class = null
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.567607039Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
12:02:14 policy-pap | sasl.login.connect.timeout.ms = null
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.568718634Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.111225ms
12:02:14 policy-pap | sasl.login.read.timeout.ms = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.572123942Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
12:02:14 kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.login.refresh.buffer.seconds = 300
12:02:14 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.573274659Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.150347ms
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.login.refresh.min.period.seconds = 60
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.577412284Z level=info msg="Executing migration" id="Add column paused in alert_definition"
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.login.refresh.window.factor = 0.8
12:02:14 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.587511443Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=10.09536ms
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.login.refresh.window.jitter = 0.05
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.590720552Z level=info msg="Executing migration" id="drop alert_definition table"
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.login.retry.backoff.max.ms = 10000
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.59149176Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=770.848µs
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.login.retry.backoff.ms = 100
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.594569813Z level=info msg="Executing migration" id="delete alert_definition_version table"
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.mechanism = GSSAPI
12:02:14 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.594661627Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=91.815µs
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.59916558Z level=info msg="Executing migration" id="recreate alert_definition_version table"
12:02:14 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.60059091Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.42476ms
12:02:14 policy-pap | sasl.oauthbearer.expected.audience = null
12:02:14 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.604308164Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.expected.issuer = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.606016959Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.708485ms
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.609402706Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.610514961Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.111685ms
12:02:14 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.614452136Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.614563862Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=113.545µs
12:02:14 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:02:14 kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.scope.claim.name = scope
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.617437824Z level=info msg="Executing migration" id="drop alert_definition_version table"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,935] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.618476915Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.038571ms
12:02:14 kafka | [2024-01-23 12:00:13,936] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.sub.claim.name = sub
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.625132294Z level=info msg="Executing migration" id="create alert_instance table"
12:02:14 kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-pap | sasl.oauthbearer.token.endpoint.url = null
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.626683921Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.550717ms
12:02:14 kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-pap | security.protocol = PLAINTEXT
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.631751292Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
12:02:14 kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
12:02:14 policy-pap | security.providers = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.63313581Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.385298ms
12:02:14 kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | send.buffer.bytes = 131072
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.636856084Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
12:02:14 kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
12:02:14 policy-pap | socket.connection.setup.timeout.max.ms = 30000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.637999941Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.143107ms
12:02:14 kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | socket.connection.setup.timeout.ms = 10000
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.642741226Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 policy-pap | ssl.cipher.suites = null
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.650035606Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.292691ms
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | > upgrade 0100-pdp.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.653363511Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
12:02:14 policy-pap | ssl.endpoint.identification.algorithm = https
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.654128059Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=764.548µs
12:02:14 policy-pap | ssl.engine.factory.class = null
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.657398811Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
12:02:14 policy-pap | ssl.key.password = null
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.658120246Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=718.965µs
12:02:14 policy-pap | ssl.keymanager.algorithm = SunX509
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.665857189Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
12:02:14 policy-pap | ssl.keystore.certificate.chain = null
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.706148472Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=40.287513ms
12:02:14 policy-pap | ssl.keystore.key = null
12:02:14 kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.71581511Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
12:02:14 policy-pap | ssl.keystore.location = null
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.753332156Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.514586ms
12:02:14 policy-pap | ssl.keystore.password = null
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.759355004Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
12:02:14 policy-pap | ssl.keystore.type = JKS
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.760059689Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=704.185µs
12:02:14 policy-pap | ssl.protocol = TLSv1.3
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.763021496Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
12:02:14 policy-pap | ssl.provider = null
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.763942241Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=920.635µs
12:02:14 policy-pap | ssl.secure.random.implementation = null
12:02:14 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.769074345Z level=info msg="Executing migration" id="add current_reason column related to current_state"
12:02:14 policy-pap | ssl.trustmanager.algorithm = PKIX
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.777707262Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.633997ms
12:02:14 policy-pap | ssl.truststore.certificates = null
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.780595855Z level=info msg="Executing migration" id="create alert_rule table"
12:02:14 policy-pap | ssl.truststore.location = null
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.781204685Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=608.74µs
12:02:14 policy-pap | ssl.truststore.password = null
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.78573985Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
12:02:14 policy-pap | ssl.truststore.type = JKS
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.787383011Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.642462ms
12:02:14 policy-pap | transaction.timeout.ms = 60000
12:02:14 kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.822960911Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
12:02:14 policy-pap | transactional.id = null
12:02:14 kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.824553Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.591418ms
12:02:14 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
12:02:14 kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.829600879Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
12:02:14 policy-pap |
12:02:14 kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.830653481Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.052482ms
12:02:14 policy-pap | [2024-01-23T12:00:13.163+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
12:02:14 kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.834012478Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
12:02:14 policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
12:02:14 kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.834080271Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=67.514µs
12:02:14 policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
12:02:14 kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.839363442Z level=info msg="Executing migration" id="add column for to alert_rule"
12:02:14 policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213166
12:02:14 kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.845159739Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.797677ms
12:02:14 kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
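[Editorial note] The policy-pap producer config dumped above (retries = 2147483647, retry.backoff.ms = 100, request.timeout.ms = 30000, PLAINTEXT transport) together with the "Instantiated an idempotent producer" line describes a standard idempotent Kafka producer. PAP itself is a Java client, but as a sketch the same knobs map onto librdkafka config names in the Python confluent-kafka client; the broker address is an assumption, and acks=all is implied by idempotence rather than shown in this excerpt.

    # Sketch: an idempotent producer configured like the dump above (assumptions noted).
    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.services" if False else "bootstrap.servers": "kafka:9092",  # assumed address
        "enable.idempotence": True,      # what "idempotent producer" means
        "acks": "all",                   # required by idempotence (assumed, not in the dump)
        "retries": 2147483647,           # matches the dump above
        "retry.backoff.ms": 100,
        "request.timeout.ms": 30000,
        "security.protocol": "PLAINTEXT",
    })
    producer.produce("policy-pdp-pap", value=b"ping")  # topic taken from this log
    producer.flush()

Idempotence is why retries can safely be Integer.MAX_VALUE: the broker deduplicates by producer id and sequence number, so a retried send never produces a duplicate record.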
12:02:14 policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de7a1a5b-b823-4e66-b4fb-feb25d317168, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.848392099Z level=info msg="Executing migration" id="add column annotations to alert_rule"
12:02:14 kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.854222647Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.828718ms
12:02:14 kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.859400644Z level=info msg="Executing migration" id="add column labels to alert_rule"
12:02:14 kafka | [2024-01-23 12:00:13,943] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:13.169+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
12:02:14 policy-db-migrator | --------------
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.865170659Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.769576ms
12:02:14 policy-pap | [2024-01-23T12:00:13.170+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
12:02:14 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
12:02:14 kafka | [2024-01-23 12:00:13,945] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.870535794Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,945] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:13.186+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,945] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.871421528Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=885.694µs
12:02:14 policy-pap | [2024-01-23T12:00:13.187+00:00|INFO|TimerManager|Thread-9] timer manager update started
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.873910901Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
12:02:14 policy-pap | [2024-01-23T12:00:13.189+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
12:02:14 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.875430537Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.535236ms
12:02:14 policy-pap | [2024-01-23T12:00:13.190+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.904789609Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
12:02:14 policy-pap | [2024-01-23T12:00:13.190+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
12:02:14 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
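[Editorial note] The 0120-0150 scripts above rework the pdpstatistics primary key: drop the old PK, add an ID column, backfill it with ROW_NUMBER() ordered by timeStamp, then declare (ID, name, version) as the new composite key. The backfill pattern is just "number the rows in timestamp order"; a small self-contained Python illustration of the same logic (hypothetical in-memory rows, whereas the real migration runs as the single UPDATE shown above):

    # Illustration of the 0140 backfill: assign a monotonically increasing
    # surrogate id ordered by timeStamp, as ROW_NUMBER() OVER (ORDER BY timeStamp) does.
    rows = [  # hypothetical sample data
        {"name": "pdp-a", "version": "1.0.0", "timeStamp": "2024-01-23 11:58:01"},
        {"name": "pdp-a", "version": "1.0.0", "timeStamp": "2024-01-23 11:59:01"},
        {"name": "pdp-b", "version": "1.0.0", "timeStamp": "2024-01-23 11:58:30"},
    ]
    for row_num, row in enumerate(sorted(rows, key=lambda r: r["timeStamp"]), start=1):
        row["id"] = row_num  # (id, name, version) then becomes the new primary key

Backfilling before adding the PRIMARY KEY constraint is necessary because the new ID column starts out unpopulated and a primary key cannot contain duplicate or missing values.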
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.914081529Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.296239ms
12:02:14 policy-pap | [2024-01-23T12:00:13.190+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.922584989Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
12:02:14 policy-pap | [2024-01-23T12:00:13.192+00:00|INFO|ServiceManager|main] Policy PAP started
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.932069728Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=9.486279ms
12:02:14 policy-pap | [2024-01-23T12:00:13.195+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.552 seconds (process running for 11.168)
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.935598693Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
12:02:14 policy-pap | [2024-01-23T12:00:13.611+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:02:14 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.936602943Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.000639ms
12:02:14 policy-pap | [2024-01-23T12:00:13.611+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: sXWmytVdQyKDGijCKdambA
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.940217391Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
12:02:14 policy-pap | [2024-01-23T12:00:13.611+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: sXWmytVdQyKDGijCKdambA
12:02:14 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.947368185Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.150294ms
12:02:14 policy-pap | [2024-01-23T12:00:13.611+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Cluster ID: sXWmytVdQyKDGijCKdambA
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.951981023Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
12:02:14 policy-pap | [2024-01-23T12:00:13.648+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.957939578Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.957875ms
12:02:14 policy-pap | [2024-01-23T12:00:13.649+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.963390228Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
12:02:14 policy-pap | [2024-01-23T12:00:13.709+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
12:02:14 kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.963460281Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=69.653µs
12:02:14 policy-pap | [2024-01-23T12:00:13.710+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: sXWmytVdQyKDGijCKdambA
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.967571135Z level=info msg="Executing migration" id="create alert_rule_version table"
12:02:14 policy-pap | [2024-01-23T12:00:13.720+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
12:02:14 kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.969584084Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=2.012489ms
12:02:14 policy-pap | [2024-01-23T12:00:13.824+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:02:14 policy-db-migrator | JOIN pdpstatistics b
12:02:14 kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:13.861+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.974434974Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
12:02:14 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
12:02:14 kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.975458595Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.023361ms
12:02:14 policy-pap | [2024-01-23T12:00:13.941+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 policy-db-migrator | SET a.id = b.id
12:02:14 kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.978869794Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
12:02:14 policy-pap | [2024-01-23T12:00:13.975+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.979949747Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.079473ms
12:02:14 policy-pap | [2024-01-23T12:00:14.048+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 policy-db-migrator |
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.98405888Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
12:02:14 policy-db-migrator |
12:02:14 kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:14.086+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
12:02:14 kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:14.154+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.984122833Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=64.503µs
12:02:14 policy-db-migrator | --------------
12:02:14 policy-pap | [2024-01-23T12:00:14.192+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
12:02:14 kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.988434187Z level=info msg="Executing migration" id="add column for to alert_rule_version"
12:02:14 policy-pap | [2024-01-23T12:00:14.260+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.994641254Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.206537ms
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
12:02:14 policy-db-migrator |
12:02:14 policy-pap | [2024-01-23T12:00:14.299+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:42.999894984Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
12:02:14 kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
12:02:14 kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
12:02:14 policy-pap | [2024-01-23T12:00:14.371+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.004781435Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.886961ms
duration=4.886961ms 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.404+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.007929141Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.476+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:02:14 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.012318658Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.387697ms 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.512+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.016755458Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.022886321Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.130323ms 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.586+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator 
t=2024-01-23T11:59:43.026437827Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.619+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.033051634Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.612997ms 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.704+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.038580707Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.712+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] (Re-)joining group 12:02:14 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.038723495Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=140.767µs 12:02:14 kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.726+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.042957284Z level=info msg="Executing migration" id=create_alert_configuration_table 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.728+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 12:02:14 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.043725192Z level=info 
msg="Migration successfully executed" id=create_alert_configuration_table duration=767.778µs 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:14.756+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.047276938Z level=info msg="Executing migration" id="Add column default in alert_configuration" 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.053938997Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.661119ms 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.05824888Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 12:02:14 policy-pap | [2024-01-23T12:00:14.756+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 0210-sequence.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.058320914Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=73.034µs 12:02:14 policy-pap | [2024-01-23T12:00:14.756+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.061776635Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 12:02:14 policy-pap | [2024-01-23T12:00:14.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Request joining group due to: need to re-join with the given member-id: consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.07258955Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.811835ms 12:02:14 policy-pap | [2024-01-23T12:00:14.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.080453519Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 12:02:14 policy-pap | [2024-01-23T12:00:14.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] (Re-)joining group 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.081182805Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=728.906µs 12:02:14 policy-pap | [2024-01-23T12:00:17.785+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b', protocol='range'} 12:02:14 kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.084487918Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 12:02:14 policy-pap | [2024-01-23T12:00:17.787+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Successfully joined group with generation Generation{generationId=1, memberId='consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a', protocol='range'} 12:02:14 kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 12:02:14 policy-db-migrator | > upgrade 0220-sequence.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.090960349Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.4712ms 12:02:14 policy-pap | [2024-01-23T12:00:17.799+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Finished assignment for group at generation 1: {consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a=Assignment(partitions=[policy-pdp-pap-0])} 12:02:14 kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.094853151Z level=info 
msg="Executing migration" id=create_ngalert_configuration_table 12:02:14 policy-pap | [2024-01-23T12:00:17.799+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b=Assignment(partitions=[policy-pdp-pap-0])} 12:02:14 kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 12:02:14 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.095615539Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=762.798µs 12:02:14 kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:17.841+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Successfully synced group in generation Generation{generationId=1, memberId='consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a', protocol='range'} 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.100593845Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 12:02:14 kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:17.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b', protocol='range'} 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.102154292Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.558497ms 12:02:14 kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:17.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.106053335Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 12:02:14 kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 12:02:14 policy-pap | 
[2024-01-23T12:00:17.843+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 12:02:14 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.112735836Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.683781ms 12:02:14 kafka | [2024-01-23 12:00:14,001] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:17.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Adding newly assigned partitions: policy-pdp-pap-0 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.118723382Z level=info msg="Executing migration" id="create provenance_type table" 12:02:14 policy-pap | [2024-01-23T12:00:17.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 12:02:14 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 12:02:14 kafka | [2024-01-23 12:00:14,002] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.119421657Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=698.675µs 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,002] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.123813874Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 12:02:14 policy-pap | [2024-01-23T12:00:17.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,002] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.12616032Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.343356ms 12:02:14 policy-pap | [2024-01-23T12:00:17.874+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Found no committed offset for partition policy-pdp-pap-0 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,002] 
TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.131948936Z level=info msg="Executing migration" id="create alert_image table" 12:02:14 policy-pap | [2024-01-23T12:00:17.898+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 12:02:14 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 12:02:14 kafka | [2024-01-23 12:00:14,003] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.133196628Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.248972ms 12:02:14 policy-pap | [2024-01-23T12:00:17.898+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,003] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.136418887Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 12:02:14 policy-pap | [2024-01-23T12:00:22.059+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 12:02:14 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 12:02:14 kafka | [2024-01-23 12:00:14,004] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.137512692Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.091524ms 12:02:14 policy-pap | [2024-01-23T12:00:22.059+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,004] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.142291938Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 12:02:14 policy-pap | [2024-01-23T12:00:22.062+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed 
initialization in 3 ms 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,005] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.142359861Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=67.273µs 12:02:14 policy-pap | [2024-01-23T12:00:34.791+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,006] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.145473405Z level=info msg="Executing migration" id=create_alert_configuration_history_table 12:02:14 policy-pap | [] 12:02:14 policy-db-migrator | > upgrade 0120-toscatrigger.sql 12:02:14 kafka | [2024-01-23 12:00:14,006] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.146620672Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.147867ms 12:02:14 policy-pap | [2024-01-23T12:00:34.792+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,061] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 policy-pap | [2024-01-23T12:00:34.792+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.149691044Z level=info msg="Executing migration" id="drop non-unique orgID index on 
alert_configuration" 12:02:14 kafka | [2024-01-23 12:00:14,072] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 12:02:14 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.150667882Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=976.748µs 12:02:14 kafka | [2024-01-23 12:00:14,073] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:34.800+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.157671599Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 12:02:14 kafka | [2024-01-23 12:00:14,074] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:34.878+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.15810923Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 12:02:14 kafka | [2024-01-23 12:00:14,076] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:34.878+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting listener 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.162913088Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 12:02:14 kafka | [2024-01-23 12:00:14,088] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 12:02:14 policy-pap | [2024-01-23T12:00:34.879+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting timer 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.163402212Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=488.734µs 12:02:14 kafka | [2024-01-23 12:00:14,090] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:34.879+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.166775659Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 12:02:14 kafka | [2024-01-23 12:00:14,090] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 12:02:14 policy-pap | [2024-01-23T12:00:34.881+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.168061153Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.284364ms 12:02:14 kafka | [2024-01-23 12:00:14,090] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:34.881+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting enqueue 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.173003447Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 12:02:14 kafka | [2024-01-23 12:00:14,090] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
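
The "(Re-)joining group" / MemberIdRequiredException sequence above is the normal two-step join from KIP-394: the broker rejects the first JoinGroup from an unknown member, hands back a member id (consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b in this run), and the client immediately rejoins with it, which is why "rebalance failed due to ... valid member id" is logged at INFO and followed by a successful generation-1 join, sync, and assignment. A sketch of the consumer side of that handshake, with an assumed bootstrap address and a rebalance listener that surfaces the same assignment event:

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;

    public class JoinGroupSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");   // assumed address
            props.put("group.id", "policy-pap");            // group id seen in this log
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) { }
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Fires after "Successfully synced group" /
                        // "Adding newly assigned partitions: policy-pdp-pap-0".
                        System.out.println("assigned: " + parts);
                    }
                });
                // The first poll drives FindCoordinator, both JoinGroup round trips,
                // SyncGroup and the committed-offset fetch seen in the log above.
                consumer.poll(Duration.ofSeconds(5));
            }
        }
    }
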
(state.change.logger) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:34.882+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate started 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.181401403Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.398596ms 12:02:14 kafka | [2024-01-23 12:00:14,099] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:34.884+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.185371539Z level=info msg="Executing migration" id="create library_element table v1" 12:02:14 kafka | [2024-01-23 12:00:14,100] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-db-migrator | > upgrade 0140-toscaparameter.sql 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.186337167Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=965.208µs 12:02:14 kafka | [2024-01-23 12:00:14,100] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:34.932+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.19024543Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 12:02:14 kafka | [2024-01-23 12:00:14,100] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.192416008Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.169577ms 12:02:14 kafka | [2024-01-23 12:00:14,100] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
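
The fifty __consumer_offsets-N partitions the broker is bringing online here are the group-metadata topic; which one a given group lands on, and therefore which broker becomes its coordinator, is just a non-negative hash of the group id modulo the partition count. The coordinator id 2147483646 in the "Discovered group coordinator" lines is Integer.MAX_VALUE minus the broker id (1), the client's convention for keeping the coordinator connection distinct from the regular broker connection. A worked check against the two group ids in this log, assuming the default offsets.topic.num.partitions of 50:

    public class CoordinatorPartition {
        // Mirrors Kafka's partitionFor(groupId): non-negative hashCode modulo partition count.
        static int partitionFor(String groupId, int numOffsetsPartitions) {
            return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
        }

        public static void main(String[] args) {
            // Group ids taken from this log; 50 is the assumed default partition count.
            System.out.println(partitionFor("policy-pap", 50));
            System.out.println(partitionFor("7faaa365-1216-4c85-9c2d-e9bca189fc3d", 50));
        }
    }
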
(state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:34.934+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.197895319Z level=info msg="Executing migration" id="create library_element_connection table v1" 12:02:14 kafka | [2024-01-23 12:00:14,108] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.198720219Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=824.55µs 12:02:14 kafka | [2024-01-23 12:00:14,108] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:34.935+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.203923077Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 12:02:14 kafka | [2024-01-23 12:00:14,108] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | > upgrade 0150-toscaproperty.sql 12:02:14 policy-pap | [2024-01-23T12:00:34.935+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.205842532Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.920205ms 12:02:14 kafka | [2024-01-23 12:00:14,108] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:34.957+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.247566156Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 12:02:14 kafka | [2024-01-23 12:00:14,109] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
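
Read in sequence, the pdpstatistics migrations scattered through the migrator lines above first rewrite the engine-stats ids (0170: UPDATE jpapdpstatistics_enginestats a JOIN pdpstatistics b ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp SET a.id = b.id), then drop the now-redundant timeStamp column (0180), and later seed a SEQ_GEN row from IFNULL(max(id),0) in pdpstatistics (0220). The sequence table itself (SEQ_NAME/SEQ_COUNT, created in 0210-sequence.sql) is the standard JPA table-generator layout. A sketch of how an entity could be wired to that table; the entity name and allocation size are assumptions, not taken from the policy models:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.TableGenerator;

    @Entity
    public class PdpStatisticsEntity {   // hypothetical entity name
        @Id
        @TableGenerator(
            name = "seqGen",
            table = "sequence",          // table created by 0210-sequence.sql
            pkColumnName = "SEQ_NAME",
            valueColumnName = "SEQ_COUNT",
            pkColumnValue = "SEQ_GEN",   // row seeded by 0220-sequence.sql
            allocationSize = 1)          // assumed; real models may batch allocations
        @GeneratedValue(strategy = GenerationType.TABLE, generator = "seqGen")
        private long id;
    }
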
(state.change.logger) 12:02:14 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 12:02:14 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.249364925Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.795919ms 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,117] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:34.960+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.255718129Z level=info msg="Executing migration" id="increase max description length to 2048" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,117] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.25574502Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.821µs 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,117] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.961+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.259231863Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 12:02:14 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 12:02:14 kafka | [2024-01-23 12:00:14,118] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.961+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.259297456Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=66.343µs 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,118] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.262728536Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 12:02:14 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,127] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.263071413Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=342.677µs 12:02:14 policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,128] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.265539965Z level=info msg="Executing migration" id="create data_keys table" 12:02:14 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.26645839Z level=info msg="Migration successfully executed" id="create data_keys table" duration=918.205µs 12:02:14 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 12:02:14 kafka | [2024-01-23 12:00:14,128] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.270825276Z level=info msg="Executing migration" id="create secrets table" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,129] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.271625966Z level=info msg="Migration successfully executed" id="create secrets table" duration=800.08µs 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,129] INFO [Broker id=1] Leader 
__consumer_offsets-48 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping enqueue 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.277529128Z level=info msg="Executing migration" id="rename data_keys name column to id" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,135] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping timer 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.32648618Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.956212ms 12:02:14 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 12:02:14 kafka | [2024-01-23 12:00:14,136] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.335471544Z level=info msg="Executing migration" id="add name column into data_keys" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,136] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping listener 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.342797907Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.328903ms 12:02:14 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 12:02:14 kafka | [2024-01-23 12:00:14,136] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopped 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.348195984Z level=info msg="Executing migration" id="copy data_keys id column values into name" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,136] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
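
In the PDP_UPDATE exchange above, PAP correlates the reply by id: the outbound message carries requestId bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, and the PDP_STATUS answer echoes it in response.responseTo, which is what lets PAP cancel the pending update timer ("update timer cancelled ... expireMs=1706011264879") and move on. A sketch of that correlation step with Jackson; the class and method names here are illustrative, not PAP's actual types:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class PdpResponseMatcher {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        /** Returns true when a PDP_STATUS message answers the given outstanding request. */
        static boolean answers(String pdpStatusJson, String pendingRequestId) throws Exception {
            JsonNode msg = MAPPER.readTree(pdpStatusJson);
            return "PDP_STATUS".equals(msg.path("messageName").asText())
                && pendingRequestId.equals(msg.path("response").path("responseTo").asText())
                && "SUCCESS".equals(msg.path("response").path("responseStatus").asText());
        }
    }
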
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate successful 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.348341271Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=145.657µs 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,150] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 start publishing next request 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.351781671Z level=info msg="Executing migration" id="rename data_keys name column to label" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,152] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.397886832Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=46.105201ms 12:02:14 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 12:02:14 kafka | [2024-01-23 12:00:14,152] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting listener 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.401209296Z level=info msg="Executing migration" id="rename data_keys id column back to name" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,152] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting timer 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.44655311Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=45.343194ms 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,152] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.451942496Z level=info msg="Executing migration" id="create kv_store table v1" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,162] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting enqueue 12:02:14 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 12:02:14 kafka | [2024-01-23 12:00:14,163] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange started 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.453184918Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.242812ms 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,163] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.458822747Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 12:02:14 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 12:02:14 kafka | [2024-01-23 12:00:14,163] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:34.992+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.460876078Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.058492ms 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,163] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.464315378Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,170] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 kafka | [2024-01-23 12:00:14,171] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:35.003+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.464779011Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=463.053µs 12:02:14 kafka | [2024-01-23 12:00:14,171] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.469381949Z level=info msg="Executing migration" id="create permission table" 12:02:14 kafka | [2024-01-23 12:00:14,171] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:35.003+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.470277533Z level=info msg="Migration successfully executed" id="create permission table" duration=894.894µs 12:02:14 kafka | [2024-01-23 12:00:14,172] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:35.019+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.477874889Z level=info msg="Executing migration" id="add unique index permission.role_id" 12:02:14 kafka | [2024-01-23 12:00:14,178] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.479613935Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.736016ms 12:02:14 kafka | [2024-01-23 12:00:14,181] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:35.020+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d5747120-881d-4c2e-9c54-68eb2a8c3ec9 12:02:14 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 12:02:14 kafka | [2024-01-23 12:00:14,181] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:35.037+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.483128449Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,181] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.484990621Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.862082ms 12:02:14 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 12:02:14 kafka | [2024-01-23 12:00:14,181] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
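The PDP_STATUS payload above answers the earlier PDP_STATE_CHANGE: its response.responseTo echoes the request id d5747120-881d-4c2e-9c54-68eb2a8c3ec9, which is how PAP's RequestIdDispatcher decides whether a listener is still waiting (the policy-heartbeat copy here finds none). A minimal Gson sketch of that correlation, assuming hypothetical POJO names — the real message classes live in policy-models, and PAP does use GSON, per the "Using GSON for REST calls" entries later in this log:

```java
import com.google.gson.Gson;

public class PdpStatusCorrelationSketch {
    // Hypothetical POJOs; field names mirror the JSON payload in the log above.
    static class Response { String responseTo; String responseStatus; String responseMessage; }
    static class PdpStatus { String messageName; String requestId; String name; Response response; }

    public static void main(String[] args) {
        // Abbreviated from the PDP_STATUS entry above.
        String json = "{\"messageName\":\"PDP_STATUS\","
                + "\"requestId\":\"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca\","
                + "\"name\":\"apex-dea203ac-ecd5-4158-b932-7658b548b741\","
                + "\"response\":{\"responseTo\":\"d5747120-881d-4c2e-9c54-68eb2a8c3ec9\","
                + "\"responseStatus\":\"SUCCESS\"}}";
        PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
        // The id PAP registered a 30000 ms state-change timer for:
        String pending = "d5747120-881d-4c2e-9c54-68eb2a8c3ec9";
        if (pending.equals(status.response.responseTo)) {
            System.out.println("response matches pending request; cancel its timer");
        } else {
            System.out.println("no listener for request id " + status.response.responseTo);
        }
    }
}
```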
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:35.037+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.492541945Z level=info msg="Executing migration" id="create role table" 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:35.041+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 kafka | [2024-01-23 12:00:14,195] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.493392237Z level=info msg="Migration successfully executed" id="create role table" duration=849.322µs 12:02:14 policy-db-migrator | 12:02:14 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 kafka | [2024-01-23 12:00:14,196] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.498694389Z level=info msg="Executing migration" id="add column display_name" 12:02:14 policy-db-migrator | 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping 12:02:14 kafka | [2024-01-23 12:00:14,196] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.510191368Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.499569ms 12:02:14 policy-db-migrator | > upgrade 0100-upgrade.sql 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping enqueue 12:02:14 kafka | [2024-01-23 12:00:14,196] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.513183436Z level=info msg="Executing migration" id="add column group_name" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,196] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-db-migrator | select 'upgrade to 1100 completed' as msg 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping timer 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.520189212Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.004846ms 12:02:14 kafka | [2024-01-23 12:00:14,204] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-db-migrator | -------------- 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.523483135Z level=info msg="Executing migration" id="add index role.org_id" 12:02:14 kafka | [2024-01-23 12:00:14,204] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping listener 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.524957258Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.474493ms 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,204] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopped 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.531080391Z level=info msg="Executing migration" id="add unique index role_org_id_name" 12:02:14 policy-db-migrator | msg 12:02:14 kafka | [2024-01-23 12:00:14,204] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange successful 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.533104591Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=2.02207ms 12:02:14 policy-db-migrator | upgrade to 1100 completed 12:02:14 kafka | [2024-01-23 12:00:14,204] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 start publishing next request 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.537594453Z level=info msg="Executing migration" id="add index role_org_id_uid" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,212] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.538804813Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.21046ms 12:02:14 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 12:02:14 kafka | [2024-01-23 12:00:14,212] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting listener 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.541756569Z level=info msg="Executing migration" id="create team role table" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,213] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.542591571Z level=info msg="Migration successfully executed" id="create team role table" duration=834.781µs 12:02:14 policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting timer 12:02:14 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 12:02:14 kafka | [2024-01-23 12:00:14,213] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.548354926Z level=info msg="Executing migration" id="add index team_role.org_id" 12:02:14 policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=63a76968-fae6-4e69-9528-57bfc1bb20a8, expireMs=1706011265043] 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,213] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.549495102Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.140276ms 12:02:14 policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting enqueue 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,219] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.555008105Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 12:02:14 policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate started 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,219] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.55714195Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.132885ms 12:02:14 policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 12:02:14 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 12:02:14 kafka | [2024-01-23 12:00:14,219] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.560945939Z level=info msg="Executing migration" id="add index team_role.team_id" 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.562547998Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.60413ms 12:02:14 policy-pap | [2024-01-23T12:00:35.054+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 kafka | [2024-01-23 12:00:14,219] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.567925754Z level=info msg="Executing migration" id="create user role table" 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 kafka | [2024-01-23 12:00:14,219] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, 
high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.568925653Z level=info msg="Migration successfully executed" id="create user role table" duration=999.409µs 12:02:14 kafka | [2024-01-23 12:00:14,227] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:35.054+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.57673488Z level=info msg="Executing migration" id="add index user_role.org_id" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,227] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:35.055+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.578083916Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.348666ms 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,227] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 12:02:14 policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.582501885Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 12:02:14 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 12:02:14 kafka | [2024-01-23 12:00:14,227] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:35.055+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.583884203Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.379428ms 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,228] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
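The [OUT|KAFKA|policy-pdp-pap] entry above shows PAP publishing the PDP_UPDATE JSON; every consumer of the policy-pdp-pap topic then receives it, and PAP discards its own copy ("discarding event of type PDP_UPDATE"). A minimal producer sketch of that publish step, with the bootstrap address assumed and the payload abbreviated from the log:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpUpdatePublishSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address; the CSIT compose environment wires its own.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Abbreviated PDP_UPDATE payload from the log; the real message also
        // carries source, timestampMs, name, pdpGroup and pdpSubgroup fields.
        String json = "{\"messageName\":\"PDP_UPDATE\","
                + "\"requestId\":\"63a76968-fae6-4e69-9528-57bfc1bb20a8\","
                + "\"pdpHeartbeatIntervalMs\":120000,"
                + "\"policiesToBeDeployed\":[],\"policiesToBeUndeployed\":[]}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-pdp-pap", json));
        }
    }
}
```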
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:35.065+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.589903731Z level=info msg="Executing migration" id="add index user_role.user_id" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,267] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.592591034Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=3.01663ms 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,267] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 63a76968-fae6-4e69-9528-57bfc1bb20a8 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.597926618Z level=info msg="Executing migration" id="create builtin role table" 12:02:14 policy-db-migrator | > upgrade 0120-audit_sequence.sql 12:02:14 kafka | [2024-01-23 12:00:14,267] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.599486345Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.559167ms 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,268] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.603272662Z level=info msg="Executing migration" id="add index builtin_role.role_id" 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 12:02:14 kafka | [2024-01-23 12:00:14,268] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition 
epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,274] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.604715744Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.443992ms 12:02:14 policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping enqueue 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,274] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.610241517Z level=info msg="Executing migration" id="add index builtin_role.name" 12:02:14 policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping timer 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,274] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.611423406Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.181659ms 12:02:14 policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=63a76968-fae6-4e69-9528-57bfc1bb20a8, expireMs=1706011265043] 12:02:14 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 12:02:14 kafka | [2024-01-23 12:00:14,274] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.617489716Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 12:02:14 policy-pap | [2024-01-23T12:00:35.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping listener 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,274] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
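The migrator's 0120-audit_sequence.sql step above seeds the JPA sequence from the highest existing audit id, so the PolicyAudit rows PAP writes later in this log ("sending audit records to database") continue numbering from there. A sketch of the same statement issued over plain JDBC; the URL and credentials are assumptions, the SQL is verbatim from the migrator output:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AuditSequenceSeedSketch {
    public static void main(String[] args) throws Exception {
        // Assumed connection details; the CSIT compose environment supplies its own.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass");
             Statement st = conn.createStatement()) {
            // Verbatim from the migrator log: start SEQ_GEN at the current
            // maximum jpapolicyaudit id, or 0 when the table is empty.
            st.executeUpdate("INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) "
                    + "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))");
        }
    }
}
```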
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.625830348Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.340822ms 12:02:14 policy-pap | [2024-01-23T12:00:35.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopped 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,282] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.65860142Z level=info msg="Executing migration" id="add index builtin_role.org_id" 12:02:14 policy-pap | [2024-01-23T12:00:35.073+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate successful 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.660526695Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.926036ms 12:02:14 kafka | [2024-01-23 12:00:14,283] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:35.073+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 has no more requests 12:02:14 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.66528317Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 12:02:14 kafka | [2024-01-23 12:00:14,283] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:42.732+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.666360373Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.076413ms 12:02:14 kafka | [2024-01-23 12:00:14,283] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:42.740+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 12:02:14 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.671018144Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 12:02:14 kafka | [2024-01-23 12:00:14,283] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:43.125+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.672494377Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.474633ms 12:02:14 kafka | [2024-01-23 12:00:14,290] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:43.734+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.683778515Z level=info msg="Executing migration" id="add unique index role.uid" 12:02:14 kafka | [2024-01-23 12:00:14,291] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:43.734+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.684918761Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.140726ms 12:02:14 kafka | [2024-01-23 12:00:14,291] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:44.312+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 12:02:14 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.688244426Z level=info msg="Executing migration" id="create seed assignment table" 12:02:14 kafka | [2024-01-23 12:00:14,291] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:44.658+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.68913016Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=887.414µs 12:02:14 kafka | [2024-01-23 12:00:14,291] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:44.770+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.694233402Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 12:02:14 kafka | [2024-01-23 12:00:14,301] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:44.770+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.695468453Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.234631ms 12:02:14 kafka | [2024-01-23 12:00:14,302] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 policy-pap | [2024-01-23T12:00:44.771+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 12:02:14 policy-db-migrator | TRUNCATE TABLE sequence 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.698780037Z level=info msg="Executing migration" id="add column hidden to role table" 12:02:14 kafka | [2024-01-23 12:00:14,302] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:44.788+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-23T12:00:44Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-23T12:00:44Z, user=policyadmin)] 12:02:14 policy-db-migrator | -------------- 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.706871987Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.09395ms 12:02:14 kafka | [2024-01-23 12:00:14,303] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 policy-pap | [2024-01-23T12:00:45.531+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.743156292Z level=info msg="Executing migration" id="permission kind migration" 12:02:14 kafka | [2024-01-23 12:00:14,303] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 12:02:14 policy-db-migrator | 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.753787458Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.632096ms 12:02:14 kafka | [2024-01-23 12:00:14,311] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 12:02:14 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 12:02:14 kafka | [2024-01-23 12:00:14,311] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.760870539Z level=info msg="Executing migration" id="permission attribute migration" 12:02:14 policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,312] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.773747786Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=12.877127ms 12:02:14 policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup 12:02:14 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 12:02:14 kafka | [2024-01-23 12:00:14,312] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.778187825Z level=info msg="Executing migration" id="permission identifier migration" 12:02:14 policy-pap | [2024-01-23T12:00:45.546+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-23T12:00:45Z, user=policyadmin)] 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,312] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.786443234Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.254419ms 12:02:14 policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,318] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.78960005Z level=info msg="Executing migration" id="add permission identifier index" 12:02:14 policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,318] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.790377318Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=774.638µs 12:02:14 policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 12:02:14 policy-db-migrator | DROP TABLE pdpstatistics 12:02:14 kafka | [2024-01-23 12:00:14,318] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.796722472Z level=info msg="Executing migration" id="create query_history table v1" 12:02:14 policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,318] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.798231497Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.510645ms 12:02:14 policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,319] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.802958481Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 12:02:14 policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,334] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.805206002Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.246671ms 12:02:14 policy-pap | [2024-01-23T12:00:45.916+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-23T12:00:45Z, user=policyadmin)] 12:02:14 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 12:02:14 kafka | [2024-01-23 12:00:14,334] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.808836282Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 12:02:14 policy-pap | [2024-01-23T12:01:04.879+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879] 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,335] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.808933286Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=95.415µs 12:02:14 policy-pap | [2024-01-23T12:01:04.992+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991] 12:02:14 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 12:02:14 kafka | [2024-01-23 12:00:14,335] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.812528174Z level=info msg="Executing migration" id="rbac disabled migrator" 12:02:14 policy-pap | [2024-01-23T12:01:06.504+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,335] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
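The TimerManager entries trace one lifecycle per request id: registered with an expireMs deadline, cancelled if the matching PDP_STATUS arrives in time, otherwise discarded (expired) 30000 ms later, as happens above for bc5b9e09-... and d5747120-... once the test run stops answering. A minimal sketch of that register/cancel/expire pattern; the class and method names here are illustrative, not the actual policy-common TimerManager implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class RequestTimerSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();

    // "update timer registered Timer [name=..., expireMs=...]"
    void register(String requestId, long waitMs) {
        timers.put(requestId, scheduler.schedule(() -> {
            timers.remove(requestId);
            System.out.println("timer discarded (expired) " + requestId);
        }, waitMs, TimeUnit.MILLISECONDS));
    }

    // "update timer cancelled Timer [name=...]" when the response arrives in time
    void cancel(String requestId) {
        ScheduledFuture<?> pending = timers.remove(requestId);
        if (pending != null) {
            pending.cancel(false);
        }
    }
}
```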
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.812597678Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=70.844µs 12:02:14 policy-pap | [2024-01-23T12:01:06.506+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,341] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.817739212Z level=info msg="Executing migration" id="teams permissions migration" 12:02:14 policy-db-migrator | 12:02:14 kafka | [2024-01-23 12:00:14,342] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.818582284Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=838.941µs 12:02:14 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 12:02:14 kafka | [2024-01-23 12:00:14,342] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.824461565Z level=info msg="Executing migration" id="dashboard permissions" 12:02:14 policy-db-migrator | -------------- 12:02:14 kafka | [2024-01-23 12:00:14,342] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.825507116Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.047522ms 12:02:14 policy-db-migrator | DROP TABLE statistics_sequence 12:02:14 kafka | [2024-01-23 12:00:14,342] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.82900934Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
12:02:14 policy-db-migrator | --------------
12:02:14 kafka | [2024-01-23 12:00:14,352] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.829739606Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=729.977µs
12:02:14 policy-db-migrator | 
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.832893542Z level=info msg="Executing migration" id="drop managed folder create actions"
12:02:14 kafka | [2024-01-23 12:00:14,353] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | policyadmin: OK: upgrade (1300)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.833156255Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=262.353µs
12:02:14 kafka | [2024-01-23 12:00:14,353] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | name version
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.838616685Z level=info msg="Executing migration" id="alerting notification permissions"
12:02:14 kafka | [2024-01-23 12:00:14,353] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | policyadmin 1300
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.839172862Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=556.677µs
12:02:14 kafka | [2024-01-23 12:00:14,353] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | ID script operation from_version to_version tag success atTime
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.848581288Z level=info msg="Executing migration" id="create query_history_star table v1"
12:02:14 kafka | [2024-01-23 12:00:14,364] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.850023689Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.441451ms
12:02:14 kafka | [2024-01-23 12:00:14,365] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.854711981Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
12:02:14 kafka | [2024-01-23 12:00:14,365] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.85589646Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.186188ms
12:02:14 kafka | [2024-01-23 12:00:14,365] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.859156091Z level=info msg="Executing migration" id="add column org_id in query_history_star"
12:02:14 kafka | [2024-01-23 12:00:14,365] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.867290512Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.134121ms
12:02:14 kafka | [2024-01-23 12:00:14,377] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.871674799Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
12:02:14 kafka | [2024-01-23 12:00:14,377] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.871780614Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=105.455µs
12:02:14 kafka | [2024-01-23 12:00:14,378] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.876057826Z level=info msg="Executing migration" id="create correlation table v1"
12:02:14 kafka | [2024-01-23 12:00:14,378] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.877019544Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=961.027µs
12:02:14 kafka | [2024-01-23 12:00:14,378] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.882273403Z level=info msg="Executing migration" id="add index correlations.uid"
12:02:14 kafka | [2024-01-23 12:00:14,386] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.883986108Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.714015ms
12:02:14 kafka | [2024-01-23 12:00:14,386] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.893508459Z level=info msg="Executing migration" id="add index correlations.source_uid"
12:02:14 kafka | [2024-01-23 12:00:14,387] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.894797973Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.290954ms
12:02:14 kafka | [2024-01-23 12:00:14,387] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.89817331Z level=info msg="Executing migration" id="add correlation config column"
12:02:14 kafka | [2024-01-23 12:00:14,387] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.907759824Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.585774ms
12:02:14 kafka | [2024-01-23 12:00:14,395] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.910984494Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
12:02:14 kafka | [2024-01-23 12:00:14,395] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.912559432Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.573657ms
12:02:14 kafka | [2024-01-23 12:00:14,395] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.921033881Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
12:02:14 kafka | [2024-01-23 12:00:14,396] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.923452211Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.419899ms
12:02:14 kafka | [2024-01-23 12:00:14,396] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.927388295Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
12:02:14 kafka | [2024-01-23 12:00:14,405] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.958065603Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.673908ms
12:02:14 kafka | [2024-01-23 12:00:14,406] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.962006098Z level=info msg="Executing migration" id="create correlation v2"
12:02:14 kafka | [2024-01-23 12:00:14,406] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.962748304Z level=info msg="Migration successfully executed" id="create correlation v2" duration=741.466µs
12:02:14 kafka | [2024-01-23 12:00:14,407] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.967007685Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
12:02:14 kafka | [2024-01-23 12:00:14,407] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.968888908Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.877863ms
12:02:14 kafka | [2024-01-23 12:00:14,415] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.974082155Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
12:02:14 kafka | [2024-01-23 12:00:14,416] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.975578459Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.496894ms
12:02:14 kafka | [2024-01-23 12:00:14,416] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.98023998Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
12:02:14 kafka | [2024-01-23 12:00:14,416] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.981421738Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.181588ms
12:02:14 kafka | [2024-01-23 12:00:14,417] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.984782194Z level=info msg="Executing migration" id="copy correlation v1 to v2"
12:02:14 kafka | [2024-01-23 12:00:14,424] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.984992565Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=210.481µs
12:02:14 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,425] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.996763347Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
12:02:14 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,425] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:43.997866592Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.108075ms
12:02:14 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,425] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.002327462Z level=info msg="Executing migration" id="add provisioning column"
12:02:14 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,425] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.010804041Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.475929ms
12:02:14 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,431] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.015603008Z level=info msg="Executing migration" id="create entity_events table"
12:02:14 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,433] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.017074551Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.470792ms
12:02:14 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,433] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.021897689Z level=info msg="Executing migration" id="create dashboard public config v1"
12:02:14 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,433] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.023731519Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.834491ms
12:02:14 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,433] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.03105441Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
12:02:14 kafka | [2024-01-23 12:00:14,439] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.031824738Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
12:02:14 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 kafka | [2024-01-23 12:00:14,440] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.036168783Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
12:02:14 kafka | [2024-01-23 12:00:14,440] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.036671548Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
12:02:14 kafka | [2024-01-23 12:00:14,440] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.078575746Z level=info msg="Executing migration" id="Drop old dashboard public config table"
12:02:14 kafka | [2024-01-23 12:00:14,440] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(UZhXoIGVRReKBLH6iRv9pA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.079851599Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.277894ms
12:02:14 kafka | [2024-01-23 12:00:14,449] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.085295347Z level=info msg="Executing migration" id="recreate dashboard public config v1"
12:02:14 kafka | [2024-01-23 12:00:14,449] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.086539869Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.244002ms
12:02:14 kafka | [2024-01-23 12:00:14,449] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.092094363Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
12:02:14 kafka | [2024-01-23 12:00:14,450] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.093222498Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.128185ms
12:02:14 kafka | [2024-01-23 12:00:14,450] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.097388284Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
12:02:14 kafka | [2024-01-23 12:00:14,459] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.09852153Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.130766ms
12:02:14 kafka | [2024-01-23 12:00:14,459] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.101641364Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
12:02:14 kafka | [2024-01-23 12:00:14,459] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.102729428Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.088584ms
12:02:14 kafka | [2024-01-23 12:00:14,460] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.106050802Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
12:02:14 kafka | [2024-01-23 12:00:14,460] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.107153726Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.103045ms
12:02:14 kafka | [2024-01-23 12:00:14,467] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.111367874Z level=info msg="Executing migration" id="Drop public config table"
12:02:14 kafka | [2024-01-23 12:00:14,467] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.112235917Z level=info msg="Migration successfully executed" id="Drop public config table" duration=865.423µs
12:02:14 kafka | [2024-01-23 12:00:14,468] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.115605983Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
12:02:14 kafka | [2024-01-23 12:00:14,468] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.116660625Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.052762ms
12:02:14 kafka | [2024-01-23 12:00:14,468] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.120573608Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
12:02:14 kafka | [2024-01-23 12:00:14,505] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.121746446Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.172858ms
12:02:14 kafka | [2024-01-23 12:00:14,506] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.19558167Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
12:02:14 kafka | [2024-01-23 12:00:14,506] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.19741213Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.83046ms
12:02:14 kafka | [2024-01-23 12:00:14,506] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.202608787Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
12:02:14 kafka | [2024-01-23 12:00:14,507] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.204110191Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.501984ms
12:02:14 kafka | [2024-01-23 12:00:14,516] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.208520398Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
12:02:14 kafka | [2024-01-23 12:00:14,517] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.240954949Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=32.430331ms
12:02:14 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 kafka | [2024-01-23 12:00:14,517] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.244845791Z level=info msg="Executing migration" id="add annotations_enabled column"
12:02:14 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 kafka | [2024-01-23 12:00:14,517] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.253288548Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.442757ms
12:02:14 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 kafka | [2024-01-23 12:00:14,517] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.25617954Z level=info msg="Executing migration" id="add time_selection_enabled column"
12:02:14 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44
12:02:14 kafka | [2024-01-23 12:00:14,526] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.262429279Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.250309ms
12:02:14 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 kafka | [2024-01-23 12:00:14,527] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.266576124Z level=info msg="Executing migration" id="delete orphaned public dashboards"
12:02:14 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 kafka | [2024-01-23 12:00:14,527] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.266894839Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=306.296µs
12:02:14 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 kafka | [2024-01-23 12:00:14,527] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.269434285Z level=info msg="Executing migration" id="add share column"
12:02:14 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 kafka | [2024-01-23 12:00:14,527] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.2780537Z level=info msg="Migration successfully executed" id="add share column" duration=8.616826ms
12:02:14 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 kafka | [2024-01-23 12:00:14,534] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.282874958Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
12:02:14 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 kafka | [2024-01-23 12:00:14,534] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.283073808Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=198.37µs
12:02:14 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 kafka | [2024-01-23 12:00:14,535] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
12:02:14 kafka | [2024-01-23 12:00:14,535] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.286517198Z level=info msg="Executing migration" id="create file table"
12:02:14 kafka | [2024-01-23 12:00:14,535] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.28717146Z level=info msg="Migration successfully executed" id="create file table" duration=653.822µs
12:02:14 kafka | [2024-01-23 12:00:14,542] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.291167997Z level=info msg="Executing migration" id="file table idx: path natural pk"
12:02:14 kafka | [2024-01-23 12:00:14,542] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.292292963Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.124746ms
12:02:14 kafka | [2024-01-23 12:00:14,542] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.295621377Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
12:02:14 kafka | [2024-01-23 12:00:14,542] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.296749553Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.128235ms
12:02:14 kafka | [2024-01-23 12:00:14,542] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.30013907Z level=info msg="Executing migration" id="create file_meta table"
12:02:14 kafka | [2024-01-23 12:00:14,550] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.30095919Z level=info msg="Migration successfully executed" id="create file_meta table" duration=819.73µs
12:02:14 kafka | [2024-01-23 12:00:14,551] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.305329006Z level=info msg="Executing migration" id="file table idx: path key"
12:02:14 kafka | [2024-01-23 12:00:14,551] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.307297433Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.966307ms
12:02:14 kafka | [2024-01-23 12:00:14,551] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.315284517Z level=info msg="Executing migration" id="set path collation in file table"
12:02:14 kafka | [2024-01-23 12:00:14,551] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.315442705Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=163.198µs
12:02:14 kafka | [2024-01-23 12:00:14,556] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.325439618Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
12:02:14 kafka | [2024-01-23 12:00:14,557] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.325570375Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=131.757µs
12:02:14 kafka | [2024-01-23 12:00:14,557] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.329278578Z level=info msg="Executing migration" id="managed permissions migration"
12:02:14 kafka | [2024-01-23 12:00:14,557] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.330145801Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=898.755µs
12:02:14 kafka | [2024-01-23 12:00:14,557] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.334682685Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
12:02:14 kafka | [2024-01-23 12:00:14,563] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.335039022Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=354.817µs
12:02:14 kafka | [2024-01-23 12:00:14,563] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.337881332Z level=info msg="Executing migration" id="RBAC action name migrator"
12:02:14 kafka | [2024-01-23 12:00:14,564] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.339085662Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.20393ms
12:02:14 kafka | [2024-01-23 12:00:14,564] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.34228496Z level=info msg="Executing migration" id="Add UID column to playlist"
12:02:14 kafka | [2024-01-23 12:00:14,564] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.354616168Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=12.331708ms
12:02:14 kafka | [2024-01-23 12:00:14,574] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.360009574Z level=info msg="Executing migration" id="Update uid column values in playlist"
12:02:14 kafka | [2024-01-23 12:00:14,574] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.360171232Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=161.658µs
12:02:14 kafka | [2024-01-23 12:00:14,574] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.369618869Z level=info msg="Executing migration" id="Add index for uid in playlist"
12:02:14 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
12:02:14 kafka | [2024-01-23 12:00:14,574] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.371364915Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.745066ms
12:02:14 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
12:02:14 kafka | [2024-01-23 12:00:14,574] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.375049567Z level=info msg="Executing migration" id="update group index for alert rules"
12:02:14 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,583] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.375637676Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=589.429µs
12:02:14 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,584] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.379049674Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
12:02:14 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,584] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.379256554Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=206.91µs
12:02:14 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,584] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.383670242Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
12:02:14 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,584] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.384119684Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=449.422µs
12:02:14 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,592] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.387733813Z level=info msg="Executing migration" id="add action column to seed_assignment"
12:02:14 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,593] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.399282493Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.549119ms
12:02:14 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,593] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.40834734Z level=info msg="Executing migration" id="add scope column to seed_assignment"
12:02:14 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,593] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.417032649Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.683698ms
12:02:14 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,593] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.420895899Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
12:02:14 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,598] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.422752951Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.856982ms
12:02:14 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,599] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.428966428Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
12:02:14 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,599] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.536781468Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=107.80032ms
12:02:14 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 kafka | [2024-01-23 12:00:14,599] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.540220678Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
12:02:14 kafka | [2024-01-23 12:00:14,599] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
12:02:14 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.541427037Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.160937ms
12:02:14 kafka | [2024-01-23 12:00:14,605] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
12:02:14 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.545267047Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
12:02:14 kafka | [2024-01-23 12:00:14,605] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
12:02:14 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.546387632Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.142066ms
12:02:14 kafka | [2024-01-23 12:00:14,605] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2301241159411100u 1 2024-01-23 11:59:47
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.552739596Z level=info msg="Executing migration" id="add primary key to seed_assigment"
12:02:14 kafka | [2024-01-23 12:00:14,606] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
12:02:14 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:47
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.592686147Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=39.920471ms
12:02:14 kafka | [2024-01-23 12:00:14,606] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1.
(state.change.logger)
12:02:14 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:48
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.628762117Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
12:02:14 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:48
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.629192739Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=428.671µs
12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
12:02:14 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:48
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.633111552Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
12:02:14 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2301241159411300u 1 2024-01-23 11:59:48
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.633976045Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=864.733µs
12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
12:02:14 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2301241159411300u 1 2024-01-23 11:59:48
12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
12:02:14 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2301241159411300u 1 2024-01-23 11:59:48
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.637368722Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
12:02:14 policy-db-migrator | policyadmin: OK @ 1300
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.637625745Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=256.623µs
12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.642950778Z level=info
msg="Executing migration" id="create folder table" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.645323865Z level=info msg="Migration successfully executed" id="create folder table" duration=2.372848ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.651961452Z level=info msg="Executing migration" id="Add index for parent_uid" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.65313508Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.173808ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.65799863Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.659319736Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.321035ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.664355374Z level=info msg="Executing migration" id="Update folder title length" 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.664382525Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.131µs 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.66813106Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.670000213Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.867432ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.673417391Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.675302794Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.881833ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.679458449Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.680851068Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.391609ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.689708665Z level=info msg="Executing migration" id="create anon_device table" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.690939256Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.233571ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.697623476Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.700087367Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.460851ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.707741495Z level=info msg="Executing migration" id="add index anon_device.updated_at" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 
(state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.709474401Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.733935ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.714959031Z level=info msg="Executing migration" id="create signing_key table" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.71615033Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.191749ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.72385084Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.725743774Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.897183ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.733040764Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.734543968Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.505865ms 12:02:14 kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.744813275Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.746587572Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=1.785398ms 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-8 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.751703145Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.76276072Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.055156ms 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.767791789Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.768524925Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=737.547µs 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.7716728Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.772947023Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.273403ms 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.776150681Z level=info msg="Executing migration" id="create sso_setting table" 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.777083587Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=935.386µs 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.787036748Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.78787807Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=842.322µs 12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.793225184Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.793802602Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=573.618µs
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
12:02:14 grafana | logger=migrator t=2024-01-23T11:59:44.798397109Z level=info msg="migrations completed" performed=523 skipped=0 duration=5.463601602s
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
12:02:14 grafana | logger=sqlstore t=2024-01-23T11:59:44.807756041Z level=info msg="Created default admin" user=admin
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
12:02:14 grafana | logger=sqlstore t=2024-01-23T11:59:44.808056686Z level=info msg="Created default organization"
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
12:02:14 grafana | logger=secrets t=2024-01-23T11:59:44.81401Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
12:02:14 grafana | logger=plugin.store t=2024-01-23T11:59:44.832469861Z level=info msg="Loading plugins..."
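
At this point in the run grafana has finished all 523 schema migrations and is loading plugins, while kafka is still completing become-leader transitions for the 50 __consumer_offsets partitions. Because grafana, kafka, and policy-db-migrator all write to the one console, a saved copy of this output is easier to study one service at a time. The sketch below is not part of the job itself: it assumes the "HH:MM:SS service | message" layout seen above, a hypothetical saved file named console.log, and the logfmt field names (msg, id, duration) that the grafana migrator itself prints.

    import re
    from collections import defaultdict

    # "12:02:14 grafana | logger=migrator ..." -> (time, service, message)
    LINE = re.compile(r"^(\d{2}:\d{2}:\d{2})\s+([\w.-]+)\s+\|\s?(.*)$")
    # logfmt pairs such as msg="Migration successfully executed" or duration=589.429µs
    LOGFMT = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def demux(path):
        """Group console lines by the service name in front of the '|' separator."""
        streams = defaultdict(list)
        with open(path, encoding="utf-8", errors="replace") as fh:
            for raw in fh:
                match = LINE.match(raw.rstrip("\n"))
                if match:  # wrapped continuation fragments simply do not match
                    _, service, message = match.groups()
                    streams[service].append(message)
        return streams

    def migration_durations(grafana_lines):
        """Yield (migration id, duration) for each completed grafana migration."""
        for message in grafana_lines:
            fields = {k: v.strip('"') for k, v in LOGFMT.findall(message)}
            if fields.get("msg") == "Migration successfully executed":
                yield fields.get("id"), fields.get("duration")

    if __name__ == "__main__":
        streams = demux("console.log")  # hypothetical saved copy of this console
        for mig_id, duration in migration_durations(streams.get("grafana", [])):
            print(f"{duration or '?':>14}  {mig_id}")

Run against this excerpt, that would single out the 107.80032ms "update seed_assignment role_name column to nullable" step and the 39.920471ms "add primary key to seed_assigment" step as the slow outliers among migrations that otherwise report microseconds to a few milliseconds.
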
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
12:02:14 grafana | logger=local.finder t=2024-01-23T11:59:44.870777781Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
12:02:14 grafana | logger=plugin.store t=2024-01-23T11:59:44.870832664Z level=info msg="Plugins loaded" count=55 duration=38.363754ms
12:02:14 grafana | logger=query_data t=2024-01-23T11:59:44.874434971Z level=info msg="Query Service initialization"
12:02:14 kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
12:02:14 grafana | logger=live.push_http t=2024-01-23T11:59:44.878640979Z level=info msg="Live Push Gateway initialization"
12:02:14 kafka | [2024-01-23 12:00:14,624] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:02:14 grafana | logger=ngalert.migration t=2024-01-23T11:59:44.885028514Z level=info msg=Starting
12:02:14 kafka | [2024-01-23 12:00:14,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:02:14 grafana | logger=ngalert.migration orgID=1 t=2024-01-23T11:59:44.886031414Z level=info msg="Migrating alerts for organisation"
12:02:14 kafka | [2024-01-23 12:00:14,637] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:02:14 grafana | logger=ngalert.migration orgID=1 t=2024-01-23T11:59:44.886463905Z level=info msg="Alerts found to migrate" alerts=0
12:02:14 kafka | [2024-01-23 12:00:14,637] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:02:14 grafana | logger=ngalert.migration orgID=1 t=2024-01-23T11:59:44.886978841Z level=warn msg="No available receivers"
12:02:14 kafka | [2024-01-23 12:00:14,637] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:02:14 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-23T11:59:44.890128006Z level=info msg="Completed legacy migration"
12:02:14 kafka | [2024-01-23 12:00:14,637] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:02:14 grafana | logger=infra.usagestats.collector t=2024-01-23T11:59:44.947028464Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
12:02:14 kafka | [2024-01-23 12:00:14,637] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:02:14 grafana | logger=provisioning.datasources t=2024-01-23T11:59:44.948907717Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from
__consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=provisioning.alerting t=2024-01-23T11:59:44.962221554Z level=info msg="starting to provision alerting" 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 grafana | logger=provisioning.alerting t=2024-01-23T11:59:44.962237175Z level=info msg="finished to provision alerting" 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=grafanaStorageLogger t=2024-01-23T11:59:44.962701597Z level=info msg="Storage starting" 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 grafana | logger=ngalert.state.manager t=2024-01-23T11:59:44.963542479Z level=info msg="Warming state cache for startup" 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=http.server t=2024-01-23T11:59:44.965235623Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 grafana | logger=ngalert.multiorg.alertmanager t=2024-01-23T11:59:44.965418892Z level=info msg="Starting MultiOrg Alertmanager" 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=sqlstore.transactions t=2024-01-23T11:59:44.976425275Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 grafana | logger=grafana.update.checker t=2024-01-23T11:59:44.990726761Z level=info msg="Update check succeeded" duration=27.374421ms 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=ngalert.state.manager t=2024-01-23T11:59:44.994755789Z level=info msg="State cache has been initialized" states=0 duration=31.2119ms 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 grafana | logger=ngalert.scheduler t=2024-01-23T11:59:44.99477778Z level=info msg="Starting scheduler" tickInterval=10s 12:02:14 kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=ticker t=2024-01-23T11:59:44.994818252Z level=info msg=starting 
first_tick=2024-01-23T11:59:50Z 12:02:14 kafka | [2024-01-23 12:00:14,639] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 grafana | logger=plugins.update.checker t=2024-01-23T11:59:45.037990913Z level=info msg="Update check succeeded" duration=75.144669ms 12:02:14 kafka | [2024-01-23 12:00:14,639] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=sqlstore.transactions t=2024-01-23T11:59:45.094321363Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 12:02:14 kafka | [2024-01-23 12:00:14,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 grafana | logger=infra.usagestats t=2024-01-23T12:00:59.975838576Z level=info msg="Usage stats are ready to report" 12:02:14 kafka | [2024-01-23 12:00:14,640] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 
12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 
1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 5 milliseconds for epoch 0, of which 5 milliseconds was 
spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 5 
milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 4 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,647] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,647] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,647] INFO [Broker id=1] Finished LeaderAndIsr request in 714ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,648] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,648] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,652] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=y4LhsVCjShWp08qTM9318g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=UZhXoIGVRReKBLH6iRv9pA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,653] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,653] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,653] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,667] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,668] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from 
__consumer_offsets-43 in 24 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,668] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 12:02:14 kafka | [2024-01-23 12:00:14,668] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,668] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:02:14 kafka | [2024-01-23 12:00:14,747] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,751] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 7faaa365-1216-4c85-9c2d-e9bca189fc3d in Empty state. Created a new member id consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,771] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:14,774] INFO [GroupCoordinator 1]: Preparing to rebalance group 7faaa365-1216-4c85-9c2d-e9bca189fc3d in state PreparingRebalance with old generation 0 (__consumer_offsets-46) (reason: Adding new member consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:15,108] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5e219e28-7118-417e-b91d-edf2321c7473 in Empty state. Created a new member id consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:15,113] INFO [GroupCoordinator 1]: Preparing to rebalance group 5e219e28-7118-417e-b91d-edf2321c7473 in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:17,781] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:17,785] INFO [GroupCoordinator 1]: Stabilized group 7faaa365-1216-4c85-9c2d-e9bca189fc3d generation 1 (__consumer_offsets-46) with 1 members (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:17,815] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:17,817] INFO [GroupCoordinator 1]: Assignment received from leader consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a for group 7faaa365-1216-4c85-9c2d-e9bca189fc3d for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:18,114] INFO [GroupCoordinator 1]: Stabilized group 5e219e28-7118-417e-b91d-edf2321c7473 generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 12:02:14 kafka | [2024-01-23 12:00:18,131] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93 for group 5e219e28-7118-417e-b91d-edf2321c7473 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 12:02:14 ++ echo 'Tearing down containers...' 12:02:14 Tearing down containers... 12:02:14 ++ docker-compose down -v --remove-orphans 12:02:14 Stopping policy-apex-pdp ... 12:02:14 Stopping policy-pap ... 12:02:14 Stopping kafka ... 12:02:14 Stopping grafana ... 12:02:14 Stopping policy-api ... 12:02:14 Stopping prometheus ... 12:02:14 Stopping compose_zookeeper_1 ... 12:02:14 Stopping mariadb ... 12:02:14 Stopping simulator ... 12:02:15 Stopping grafana ... done 12:02:15 Stopping prometheus ... done 12:02:24 Stopping policy-apex-pdp ... done 12:02:35 Stopping policy-pap ... done 12:02:35 Stopping simulator ... done 12:02:36 Stopping mariadb ... done 12:02:36 Stopping kafka ... done 12:02:37 Stopping compose_zookeeper_1 ... done 12:02:45 Stopping policy-api ... done 12:02:45 Removing policy-apex-pdp ... 12:02:45 Removing policy-pap ... 12:02:45 Removing kafka ... 12:02:45 Removing grafana ... 12:02:45 Removing policy-api ... 12:02:45 Removing policy-db-migrator ... 12:02:45 Removing prometheus ... 12:02:45 Removing compose_zookeeper_1 ... 12:02:45 Removing mariadb ... 12:02:45 Removing simulator ... 12:02:45 Removing simulator ... done 12:02:45 Removing policy-apex-pdp ... done 12:02:45 Removing kafka ... done 12:02:45 Removing compose_zookeeper_1 ... done 12:02:45 Removing policy-pap ... done 12:02:45 Removing grafana ... done 12:02:45 Removing policy-db-migrator ... done 12:02:45 Removing policy-api ... 
done 12:02:45 Removing mariadb ... done 12:02:45 Removing prometheus ... done 12:02:45 Removing network compose_default 12:02:46 ++ cd /w/workspace/policy-pap-master-project-csit-pap 12:02:46 + load_set 12:02:46 + _setopts=hxB 12:02:46 ++ echo braceexpand:hashall:interactive-comments:xtrace 12:02:46 ++ tr : ' ' 12:02:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 12:02:46 + set +o braceexpand 12:02:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 12:02:46 + set +o hashall 12:02:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 12:02:46 + set +o interactive-comments 12:02:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 12:02:46 + set +o xtrace 12:02:46 ++ echo hxB 12:02:46 ++ sed 's/./& /g' 12:02:46 + for i in $(echo "$_setopts" | sed 's/./& /g') 12:02:46 + set +h 12:02:46 + for i in $(echo "$_setopts" | sed 's/./& /g') 12:02:46 + set +x 12:02:46 + [[ -n /tmp/tmp.T4ASB2z6Jw ]] 12:02:46 + rsync -av /tmp/tmp.T4ASB2z6Jw/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 12:02:46 sending incremental file list 12:02:46 ./ 12:02:46 log.html 12:02:46 output.xml 12:02:46 report.html 12:02:46 testplan.txt 12:02:46 12:02:46 sent 911,149 bytes received 95 bytes 1,822,488.00 bytes/sec 12:02:46 total size is 910,607 speedup is 1.00 12:02:46 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 12:02:46 + exit 0 12:02:46 $ ssh-agent -k 12:02:46 unset SSH_AUTH_SOCK; 12:02:46 unset SSH_AGENT_PID; 12:02:46 echo Agent pid 2123 killed; 12:02:46 [ssh-agent] Stopped. 12:02:46 Robot results publisher started... 12:02:46 -Parsing output xml: 12:02:46 Done! 12:02:46 WARNING! Could not find file: **/log.html 12:02:46 WARNING! Could not find file: **/report.html 12:02:46 -Copying log files to build dir: 12:02:47 Done! 12:02:47 -Assigning results to build: 12:02:47 Done! 12:02:47 -Checking thresholds: 12:02:47 Done! 12:02:47 Done publishing Robot results. 12:02:47 [PostBuildScript] - [INFO] Executing post build scripts. 
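The teardown and archive sequence above (a docker-compose down followed by an rsync of the Robot Framework artifacts into the workspace) can be captured in a small helper script. The sketch below is a minimal reconstruction under stated assumptions: RESULTS_DIR stands in for the /tmp/tmp.* directory the job generates, and ARCHIVE_TARGET for the csit/archives/pap path; neither name comes from the actual CSIT scripts.

  #!/bin/bash
  # Minimal sketch of the teardown + archive steps seen in the log above.
  # RESULTS_DIR and ARCHIVE_TARGET are illustrative names (assumptions).
  set -euo pipefail
  RESULTS_DIR="${RESULTS_DIR:-/tmp/results}"
  ARCHIVE_TARGET="${WORKSPACE:-$PWD}/csit/archives/pap"
  echo 'Tearing down containers...'
  # Stops containers, then removes them plus named volumes and the network
  docker-compose down -v --remove-orphans
  if [ -d "${RESULTS_DIR}" ]; then
    mkdir -p "${ARCHIVE_TARGET}"
    # log.html, output.xml, report.html, testplan.txt as listed in the log
    rsync -av "${RESULTS_DIR}/" "${ARCHIVE_TARGET}"
  fi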
12:02:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3195400356898807782.sh 12:02:47 ---> sysstat.sh 12:02:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12437479620954204043.sh 12:02:47 ---> package-listing.sh 12:02:47 ++ tr '[:upper:]' '[:lower:]' 12:02:47 ++ facter osfamily 12:02:47 + OS_FAMILY=debian 12:02:47 + workspace=/w/workspace/policy-pap-master-project-csit-pap 12:02:47 + START_PACKAGES=/tmp/packages_start.txt 12:02:47 + END_PACKAGES=/tmp/packages_end.txt 12:02:47 + DIFF_PACKAGES=/tmp/packages_diff.txt 12:02:47 + PACKAGES=/tmp/packages_start.txt 12:02:47 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 12:02:47 + PACKAGES=/tmp/packages_end.txt 12:02:47 + case "${OS_FAMILY}" in 12:02:47 + dpkg -l 12:02:47 + grep '^ii' 12:02:47 + '[' -f /tmp/packages_start.txt ']' 12:02:47 + '[' -f /tmp/packages_end.txt ']' 12:02:47 + diff /tmp/packages_start.txt /tmp/packages_end.txt 12:02:47 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 12:02:47 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 12:02:47 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 12:02:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9208720287268946047.sh 12:02:47 ---> capture-instance-metadata.sh 12:02:47 Setup pyenv: 12:02:47 system 12:02:47 3.8.13 12:02:47 3.9.13 12:02:47 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 12:02:48 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv 12:02:50 lf-activate-venv(): INFO: Installing: lftools 12:03:01 lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH 12:03:01 INFO: Running in OpenStack, capturing instance metadata 12:03:01 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17293255453978466800.sh 12:03:01 provisioning config files... 12:03:01 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config17626998686283198592tmp 12:03:01 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 12:03:01 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 12:03:01 [EnvInject] - Injecting environment variables from a build step. 12:03:01 [EnvInject] - Injecting as environment variables the properties content 12:03:01 SERVER_ID=logs 12:03:01 12:03:01 [EnvInject] - Variables injected successfully. 12:03:01 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins497474894402880750.sh 12:03:01 ---> create-netrc.sh 12:03:01 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3876169952220041563.sh 12:03:01 ---> python-tools-install.sh 12:03:01 Setup pyenv: 12:03:01 system 12:03:01 3.8.13 12:03:01 3.9.13 12:03:01 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 12:03:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv 12:03:04 lf-activate-venv(): INFO: Installing: lftools 12:03:17 lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH 12:03:17 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13900155818246817967.sh 12:03:17 ---> sudo-logs.sh 12:03:17 Archiving 'sudo' log.. 
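The package-listing.sh step above snapshots the installed Debian packages and diffs the before/after lists so the job archive records what the build added. A minimal sketch of that logic, using the file names shown in the log (the surrounding script structure is an assumption):

  #!/bin/bash
  # Sketch of the package-listing logic: record the 'ii' rows of dpkg -l,
  # diff the start-of-job and end-of-job lists, and archive all three files.
  START=/tmp/packages_start.txt
  END=/tmp/packages_end.txt
  DIFF=/tmp/packages_diff.txt
  dpkg -l | grep '^ii' > "${END}"
  if [ -f "${START}" ] && [ -f "${END}" ]; then
    # diff exits non-zero when the lists differ, so guard with || true
    diff "${START}" "${END}" > "${DIFF}" || true
  fi
  mkdir -p "${WORKSPACE:-$PWD}/archives/"
  cp -f "${DIFF}" "${END}" "${START}" "${WORKSPACE:-$PWD}/archives/"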
12:03:17 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12452643335119819247.sh 12:03:17 ---> job-cost.sh 12:03:17 Setup pyenv: 12:03:17 system 12:03:17 3.8.13 12:03:17 3.9.13 12:03:17 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 12:03:17 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv 12:03:19 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 12:03:26 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 12:03:26 lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible. 12:03:26 lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH 12:03:26 INFO: No Stack... 12:03:26 INFO: Retrieving Pricing Info for: v3-standard-8 12:03:27 INFO: Archiving Costs 12:03:27 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins6823146667807296703.sh 12:03:27 ---> logs-deploy.sh 12:03:27 Setup pyenv: 12:03:27 system 12:03:27 3.8.13 12:03:27 3.9.13 12:03:27 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 12:03:27 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv 12:03:28 lf-activate-venv(): INFO: Installing: lftools 12:03:37 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 12:03:37 python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible. 12:03:38 lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH 12:03:38 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1547 12:03:38 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 12:03:39 Archives upload complete. 
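The two pip ERROR lines above are contradictory pins on openstacksdk in the shared venv: job-cost.sh leaves it above 2.0 (breaking lftools, which wants <1.5.0), and logs-deploy.sh then ends up at 1.4.0 (breaking python-openstackclient, which wants >=2.0.0). Since the two requirements are mutually exclusive, one way out would be to keep the tools in separate virtualenvs rather than sharing /tmp/venv-dyeB; the sketch below shows the idea. The venv paths are illustrative, and whether these exact pins resolve cleanly is an assumption based only on the versions quoted in the errors, not on what lf-activate-venv actually does.

  #!/bin/bash
  # Sketch: isolate the conflicting openstacksdk requirements in two venvs.
  # Paths and pins are assumptions derived from the error messages above.
  python3 -m venv /tmp/venv-lftools
  /tmp/venv-lftools/bin/pip install 'lftools==0.37.8' 'openstacksdk<1.5.0'
  python3 -m venv /tmp/venv-osc
  /tmp/venv-osc/bin/pip install 'python-openstackclient' 'openstacksdk>=2.0.0'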
12:03:39 INFO: archiving logs to Nexus
12:03:40 ---> uname -a:
12:03:40 Linux prd-ubuntu1804-docker-8c-8g-14552 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
12:03:40
12:03:40
12:03:40 ---> lscpu:
12:03:40 Architecture: x86_64
12:03:40 CPU op-mode(s): 32-bit, 64-bit
12:03:40 Byte Order: Little Endian
12:03:40 CPU(s): 8
12:03:40 On-line CPU(s) list: 0-7
12:03:40 Thread(s) per core: 1
12:03:40 Core(s) per socket: 1
12:03:40 Socket(s): 8
12:03:40 NUMA node(s): 1
12:03:40 Vendor ID: AuthenticAMD
12:03:40 CPU family: 23
12:03:40 Model: 49
12:03:40 Model name: AMD EPYC-Rome Processor
12:03:40 Stepping: 0
12:03:40 CPU MHz: 2800.000
12:03:40 BogoMIPS: 5600.00
12:03:40 Virtualization: AMD-V
12:03:40 Hypervisor vendor: KVM
12:03:40 Virtualization type: full
12:03:40 L1d cache: 32K
12:03:40 L1i cache: 32K
12:03:40 L2 cache: 512K
12:03:40 L3 cache: 16384K
12:03:40 NUMA node0 CPU(s): 0-7
12:03:40 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
12:03:40
12:03:40
12:03:40 ---> nproc:
12:03:40 8
12:03:40
12:03:40
12:03:40 ---> df -h:
12:03:40 Filesystem Size Used Avail Use% Mounted on
12:03:40 udev 16G 0 16G 0% /dev
12:03:40 tmpfs 3.2G 708K 3.2G 1% /run
12:03:40 /dev/vda1 155G 15G 141G 10% /
12:03:40 tmpfs 16G 0 16G 0% /dev/shm
12:03:40 tmpfs 5.0M 0 5.0M 0% /run/lock
12:03:40 tmpfs 16G 0 16G 0% /sys/fs/cgroup
12:03:40 /dev/vda15 105M 4.4M 100M 5% /boot/efi
12:03:40 tmpfs 3.2G 0 3.2G 0% /run/user/1001
12:03:40
12:03:40
12:03:40 ---> free -m:
12:03:40 total used free shared buff/cache available
12:03:40 Mem: 32167 846 24634 0 6686 30865
12:03:40 Swap: 1023 0 1023
12:03:40
12:03:40
12:03:40 ---> ip addr:
12:03:40 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
12:03:40 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
12:03:40 inet 127.0.0.1/8 scope host lo
12:03:40 valid_lft forever preferred_lft forever
12:03:40 inet6 ::1/128 scope host
12:03:40 valid_lft forever preferred_lft forever
12:03:40 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
12:03:40 link/ether fa:16:3e:05:bb:b6 brd ff:ff:ff:ff:ff:ff
12:03:40 inet 10.30.106.120/23 brd 10.30.107.255 scope global dynamic ens3
12:03:40 valid_lft 85891sec preferred_lft 85891sec
12:03:40 inet6 fe80::f816:3eff:fe05:bbb6/64 scope link
12:03:40 valid_lft forever preferred_lft forever
12:03:40 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
12:03:40 link/ether 02:42:eb:4e:44:b0 brd ff:ff:ff:ff:ff:ff
12:03:40 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
12:03:40 valid_lft forever preferred_lft forever
12:03:40
12:03:40
12:03:40 ---> sar -b -r -n DEV:
12:03:40 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14552) 01/23/24 _x86_64_ (8 CPU)
12:03:40
12:03:40 11:55:14 LINUX RESTART (8 CPU)
12:03:40
12:03:40 11:56:01 tps rtps wtps bread/s bwrtn/s
12:03:40 11:57:01 97.02 17.71 79.30 1021.43 25416.43
12:03:40 11:58:01 120.20 22.88 97.32 2757.14 29077.82
12:03:40 11:59:01 164.02 0.28 163.74 31.86 89267.92
12:03:40 12:00:01 395.80 11.65 384.16 776.37 98886.80
12:03:40 12:01:01 30.03 0.70 29.33 40.93 22584.49
12:03:40 12:02:01 15.98 0.00 15.98 0.00 19299.32
12:03:40 12:03:01 68.42 1.05 67.37 69.06 21836.74
12:03:40 Average: 127.36 7.75 119.61 670.97 43768.39
12:03:40
12:03:40 11:56:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
12:03:40 11:57:01 30075356 31684164 2863864 8.69 69240 1849444 1446032 4.25 884876 1684892 173068
12:03:40 11:58:01 29507212 31691856 3432008 10.42 89680 2383952 1547808 4.55 963692 2131520 352692
12:03:40 11:59:01 26726292 31668408 6212928 18.86 133720 4974528 1409496 4.15 1012044 4711732 865044
12:03:40 12:00:01 23787924 30330944 9151296 27.78 157552 6478748 7775868 22.88 2488744 6046624 356
12:03:40 12:01:01 23083132 29632912 9856088 29.92 158888 6481664 8763248 25.78 3235272 5997640 284
12:03:40 12:02:01 23064804 29615224 9874416 29.98 159056 6481980 8763364 25.78 3252476 5997292 360
12:03:40 12:03:01 25297312 31668812 7641908 23.20 161484 6319272 1489416 4.38 1225072 5855864 44596
12:03:40 Average: 25934576 30898903 7004644 21.27 132803 4995655 4456462 13.11 1866025 4632223 205200
12:03:40
12:03:40 11:56:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
12:03:40 11:57:01 lo 1.33 1.33 0.14 0.14 0.00 0.00 0.00 0.00
12:03:40 11:57:01 ens3 73.97 48.54 1005.62 9.04 0.00 0.00 0.00 0.00
12:03:40 11:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:03:40 11:58:01 lo 5.20 5.20 0.49 0.49 0.00 0.00 0.00 0.00
12:03:40 11:58:01 ens3 112.38 82.79 2420.56 10.85 0.00 0.00 0.00 0.00
12:03:40 11:58:01 br-0f0b718c2412 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:03:40 11:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:03:40 11:59:01 lo 6.73 6.73 0.68 0.68 0.00 0.00 0.00 0.00
12:03:40 11:59:01 ens3 813.88 459.37 19255.92 34.09 0.00 0.00 0.00 0.00
12:03:40 11:59:01 br-0f0b718c2412 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:03:40 11:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:03:40 12:00:01 vethcf60905 0.10 0.43 0.01 0.02 0.00 0.00 0.00 0.00
12:03:40 12:00:01 lo 2.67 2.67 0.23 0.23 0.00 0.00 0.00 0.00
12:03:40 12:00:01 vethb35b943 54.67 64.76 19.21 16.03 0.00 0.00 0.00 0.00
12:03:40 12:00:01 veth6d7739d 1.83 1.90 0.18 0.19 0.00 0.00 0.00 0.00
12:03:40 12:01:01 vethcf60905 0.48 0.48 0.05 1.37 0.00 0.00 0.00 0.00
12:03:40 12:01:01 lo 5.17 5.17 3.51 3.51 0.00 0.00 0.00 0.00
12:03:40 12:01:01 vethb35b943 51.82 62.66 57.90 15.34 0.00 0.00 0.00 0.00
12:03:40 12:01:01 veth6d7739d 18.20 15.03 2.16 2.24 0.00 0.00 0.00 0.00
12:03:40 12:02:01 vethcf60905 0.58 0.60 0.05 1.52 0.00 0.00 0.00 0.00
12:03:40 12:02:01 lo 5.18 5.18 0.38 0.38 0.00 0.00 0.00 0.00
12:03:40 12:02:01 vethb35b943 1.53 1.72 0.54 0.39 0.00 0.00 0.00 0.00
12:03:40 12:02:01 veth6d7739d 13.93 9.38 1.06 1.34 0.00 0.00 0.00 0.00
12:03:40 12:03:01 lo 4.87 4.87 0.45 0.45 0.00 0.00 0.00 0.00
12:03:40 12:03:01 ens3 1868.07 1096.07 37243.74 158.15 0.00 0.00 0.00 0.00
12:03:40 12:03:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:03:40 Average: lo 4.45 4.45 0.84 0.84 0.00 0.00 0.00 0.00
12:03:40 Average: ens3 214.06 124.89 5209.94 14.53 0.00 0.00 0.00 0.00
12:03:40 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:03:40
12:03:40
12:03:40 ---> sar -P ALL:
12:03:40 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14552) 01/23/24 _x86_64_ (8 CPU)
12:03:40
12:03:40 11:55:14 LINUX RESTART (8 CPU)
12:03:40
12:03:40 11:56:01 CPU %user %nice %system %iowait %steal %idle
12:03:40 11:57:01 all 8.42 0.00 0.61 3.06 0.11 87.80
12:03:40 11:57:01 0 3.54 0.00 0.30 0.08 0.03 96.04
12:03:40 11:57:01 1 4.20 0.00 0.37 0.10 0.02 95.31
12:03:40 11:57:01 2 0.70 0.00 0.12 13.95 0.02 85.22
12:03:40 11:57:01 3 13.76 0.00 0.91 1.51 0.68 83.14
12:03:40 11:57:01 4 14.24 0.00 0.88 0.40 0.03 84.44
12:03:40 11:57:01 5 6.18 0.00 0.50 0.65 0.03 92.63
12:03:40 11:57:01 6 9.46 0.00 0.62 0.85 0.05 89.02
12:03:40 11:57:01 7 15.21 0.00 1.15 6.98 0.05 76.61
12:03:40 11:58:01 all 9.28 0.00 1.03 4.32 0.04 85.32
12:03:40 11:58:01 0 6.88 0.00 1.12 0.08 0.03 91.88
12:03:40 11:58:01 1 2.40 0.00 0.45 0.00 0.05 97.10
12:03:40 11:58:01 2 1.42 0.00 0.43 15.33 0.02 82.80
12:03:40 11:58:01 3 11.01 0.00 1.13 0.90 0.05 86.91
12:03:40 11:58:01 4 8.84 0.00 1.35 2.47 0.03 87.30
12:03:40 11:58:01 5 13.83 0.00 1.07 0.55 0.03 84.52
12:03:40 11:58:01 6 11.61 0.00 1.18 2.24 0.03 84.94
12:03:40 11:58:01 7 18.29 0.00 1.50 12.96 0.05 67.19
12:03:40 11:59:01 all 9.71 0.00 4.11 9.18 0.07 76.93
12:03:40 11:59:01 0 10.39 0.00 5.16 25.65 0.07 58.73
12:03:40 11:59:01 1 9.60 0.00 4.24 9.96 0.07 76.12
12:03:40 11:59:01 2 10.96 0.00 5.06 18.64 0.07 65.28
12:03:40 11:59:01 3 10.42 0.00 3.72 0.00 0.07 85.79
12:03:40 11:59:01 4 8.53 0.00 4.34 0.24 0.07 86.82
12:03:40 11:59:01 5 9.45 0.00 4.16 2.37 0.05 83.98
12:03:40 11:59:01 6 10.70 0.00 3.37 4.82 0.07 81.04
12:03:40 11:59:01 7 7.64 0.00 2.86 11.83 0.05 77.63
12:03:40 12:00:01 all 20.10 0.00 4.41 7.92 0.08 67.49
12:03:40 12:00:01 0 14.40 0.00 3.92 1.03 0.07 80.58
12:03:40 12:00:01 1 25.62 0.00 5.72 34.49 0.10 34.07
12:03:40 12:00:01 2 21.13 0.00 4.16 1.35 0.07 73.30
12:03:40 12:00:01 3 18.59 0.00 3.77 1.93 0.07 75.64
12:03:40 12:00:01 4 23.87 0.00 5.67 2.34 0.08 68.04
12:03:40 12:00:01 5 21.31 0.00 4.34 3.10 0.08 71.16
12:03:40 12:00:01 6 16.52 0.00 3.84 1.43 0.08 78.13
12:03:40 12:00:01 7 19.43 0.00 3.88 17.81 0.08 58.79
12:03:40 12:01:01 all 16.95 0.00 1.55 1.01 0.06 80.44
12:03:40 12:01:01 0 18.77 0.00 1.99 0.02 0.07 79.15
12:03:40 12:01:01 1 17.85 0.00 1.64 3.61 0.05 76.85
12:03:40 12:01:01 2 20.77 0.00 1.61 0.07 0.05 77.50
12:03:40 12:01:01 3 17.93 0.00 1.47 3.28 0.07 77.25
12:03:40 12:01:01 4 16.73 0.00 1.48 0.03 0.07 81.69
12:03:40 12:01:01 5 13.38 0.00 1.20 0.07 0.07 85.28
12:03:40 12:01:01 6 16.01 0.00 1.42 0.36 0.07 82.13
12:03:40 12:01:01 7 14.12 0.00 1.54 0.69 0.05 83.61
12:03:40 12:02:01 all 1.16 0.00 0.15 0.99 0.04 97.66
12:03:40 12:02:01 0 0.72 0.00 0.20 0.00 0.05 99.03
12:03:40 12:02:01 1 0.55 0.00 0.20 4.77 0.03 94.44
12:03:40 12:02:01 2 1.27 0.00 0.07 0.05 0.03 98.58
12:03:40 12:02:01 3 1.05 0.00 0.10 0.53 0.05 98.26
12:03:40 12:02:01 4 1.09 0.00 0.13 2.50 0.03 96.24
12:03:40 12:02:01 5 1.52 0.00 0.18 0.00 0.07 98.23
12:03:40 12:02:01 6 2.05 0.00 0.20 0.00 0.03 97.72
12:03:40 12:02:01 7 1.08 0.00 0.15 0.02 0.03 98.72
12:03:40 12:03:01 all 4.05 0.00 0.70 1.79 0.04 93.43
12:03:40 12:03:01 0 1.67 0.00 0.69 0.42 0.03 97.19
12:03:40 12:03:01 1 1.59 0.00 0.60 0.68 0.03 97.09
12:03:40 12:03:01 2 1.82 0.00 0.40 0.13 0.03 97.61
12:03:40 12:03:01 3 2.13 0.00 0.47 0.64 0.05 96.72
12:03:40 12:03:01 4 0.85 0.00 0.75 11.07 0.03 87.30
12:03:40 12:03:01 5 2.52 0.00 0.70 0.02 0.03 96.73
12:03:40 12:03:01 6 19.69 0.00 1.24 0.89 0.07 78.12
12:03:40 12:03:01 7 2.10 0.00 0.70 0.50 0.03 96.66
12:03:40 Average: all 9.94 0.00 1.79 4.03 0.06 84.18
12:03:40 Average: 0 8.04 0.00 1.90 3.86 0.05 86.15
12:03:40 Average: 1 8.80 0.00 1.88 7.61 0.05 81.66
12:03:40 Average: 2 8.27 0.00 1.68 7.06 0.04 82.94
12:03:40 Average: 3 10.71 0.00 1.65 1.26 0.15 86.23
12:03:40 Average: 4 10.57 0.00 2.08 2.73 0.05 84.57
12:03:40 Average: 5 9.73 0.00 1.73 0.96 0.05 87.53
12:03:40 Average: 6 12.29 0.00 1.69 1.51 0.06 84.46
12:03:40 Average: 7 11.12 0.00 1.68 7.23 0.05 79.93
12:03:40
12:03:40
12:03:40
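The sar dump above closes the job with disk I/O, memory, network, and per-CPU utilization tables for the whole run. When skimming many such logs, the Average: rows are usually the only ones of interest; a short awk filter like the one below pulls them out (sar_dump.txt is an illustrative file name, and the patterns assume the console timestamps have been stripped from each line).

  # Print every Average row from a 'sar -P ALL' style dump.
  awk '/^Average:/' sar_dump.txt
  # Print just the overall average idle percentage: on the
  # "Average: all 9.94 0.00 1.79 4.03 0.06 84.18" row, %idle is the last field.
  awk '$1 == "Average:" && $2 == "all" { print "avg %idle:", $NF }' sar_dump.txt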