23:11:00 Started by timer
23:11:00 Running as SYSTEM
23:11:00 [EnvInject] - Loading node environment variables.
23:11:00 Building remotely on prd-ubuntu1804-docker-8c-8g-24270 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:11:00 [ssh-agent] Looking for ssh-agent implementation...
23:11:00 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:11:00 $ ssh-agent
23:11:00 SSH_AUTH_SOCK=/tmp/ssh-vO4JGgQV3mD9/agent.2084
23:11:00 SSH_AGENT_PID=2086
23:11:00 [ssh-agent] Started.
23:11:00 Running ssh-add (command line suppressed)
23:11:00 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9284425168308043501.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9284425168308043501.key)
23:11:00 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:11:00 The recommended git tool is: NONE
23:11:02 using credential onap-jenkins-ssh
23:11:02 Wiping out workspace first.
23:11:02 Cloning the remote Git repository
23:11:02 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:02 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:02 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:02 > git --version # timeout=10
23:11:02 > git --version # 'git version 2.17.1'
23:11:02 using GIT_SSH to set credentials Gerrit user
23:11:02 Verifying host key using manually-configured host key entries
23:11:02 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:02 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:02 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:03 Avoid second fetch
23:11:03 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:03 Checking out Revision f35d01581c8da55946d604e5a444972fe4b0d318 (refs/remotes/origin/master)
23:11:03 > git config core.sparsecheckout # timeout=10
23:11:03 > git checkout -f f35d01581c8da55946d604e5a444972fe4b0d318 # timeout=30
23:11:03 Commit message: "Improvements to CSIT"
23:11:03 > git rev-list --no-walk f35d01581c8da55946d604e5a444972fe4b0d318 # timeout=10
23:11:03 provisioning config files...
23:11:03 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:03 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:03 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1929568632304465149.sh
23:11:03 ---> python-tools-install.sh
23:11:03 Setup pyenv:
23:11:03 * system (set by /opt/pyenv/version)
23:11:03 * 3.8.13 (set by /opt/pyenv/version)
23:11:03 * 3.9.13 (set by /opt/pyenv/version)
23:11:03 * 3.10.6 (set by /opt/pyenv/version)
23:11:08 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-H8Lq
23:11:08 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:11 lf-activate-venv(): INFO: Installing: lftools
23:11:47 lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
23:11:47 Generating Requirements File
23:12:18 Python 3.10.6
23:12:19 pip 24.0 from /tmp/venv-H8Lq/lib/python3.10/site-packages/pip (python 3.10)
23:12:19 appdirs==1.4.4
23:12:19 argcomplete==3.3.0
23:12:19 aspy.yaml==1.3.0
23:12:19 attrs==23.2.0
23:12:19 autopage==0.5.2
23:12:19 beautifulsoup4==4.12.3
23:12:19 boto3==1.34.87
23:12:19 botocore==1.34.87
23:12:19 bs4==0.0.2
23:12:19 cachetools==5.3.3
23:12:19 certifi==2024.2.2
23:12:19 cffi==1.16.0
23:12:19 cfgv==3.4.0
23:12:19 chardet==5.2.0
23:12:19 charset-normalizer==3.3.2
23:12:19 click==8.1.7
23:12:19 cliff==4.6.0
23:12:19 cmd2==2.4.3
23:12:19 cryptography==3.3.2
23:12:19 debtcollector==3.0.0
23:12:19 decorator==5.1.1
23:12:19 defusedxml==0.7.1
23:12:19 Deprecated==1.2.14
23:12:19 distlib==0.3.8
23:12:19 dnspython==2.6.1
23:12:19 docker==4.2.2
23:12:19 dogpile.cache==1.3.2
23:12:19 email_validator==2.1.1
23:12:19 filelock==3.13.4
23:12:19 future==1.0.0
23:12:19 gitdb==4.0.11
23:12:19 GitPython==3.1.43
23:12:19 google-auth==2.29.0
23:12:19 httplib2==0.22.0
23:12:19 identify==2.5.35
23:12:19 idna==3.7
23:12:19 importlib-resources==1.5.0
23:12:19 iso8601==2.1.0
23:12:19 Jinja2==3.1.3
23:12:19 jmespath==1.0.1
23:12:19 jsonpatch==1.33
23:12:19 jsonpointer==2.4
23:12:19 jsonschema==4.21.1
23:12:19 jsonschema-specifications==2023.12.1
23:12:19 keystoneauth1==5.6.0
23:12:19 kubernetes==29.0.0
23:12:19 lftools==0.37.10
23:12:19 lxml==5.2.1
23:12:19 MarkupSafe==2.1.5
23:12:19 msgpack==1.0.8
23:12:19 multi_key_dict==2.0.3
23:12:19 munch==4.0.0
23:12:19 netaddr==1.2.1
23:12:19 netifaces==0.11.0
23:12:19 niet==1.4.2
23:12:19 nodeenv==1.8.0
23:12:19 oauth2client==4.1.3
23:12:19 oauthlib==3.2.2
23:12:19 openstacksdk==3.1.0
23:12:19 os-client-config==2.1.0
23:12:19 os-service-types==1.7.0
23:12:19 osc-lib==3.0.1
23:12:19 oslo.config==9.4.0
23:12:19 oslo.context==5.5.0
23:12:19 oslo.i18n==6.3.0
23:12:19 oslo.log==5.5.1
23:12:19 oslo.serialization==5.4.0
23:12:19 oslo.utils==7.1.0
23:12:19 packaging==24.0
23:12:19 pbr==6.0.0
23:12:19 platformdirs==4.2.0
23:12:19 prettytable==3.10.0
23:12:19 pyasn1==0.6.0
23:12:19 pyasn1_modules==0.4.0
23:12:19 pycparser==2.22
23:12:19 pygerrit2==2.0.15
23:12:19 PyGithub==2.3.0
23:12:19 pyinotify==0.9.6
23:12:19 PyJWT==2.8.0
23:12:19 PyNaCl==1.5.0
23:12:19 pyparsing==2.4.7
23:12:19 pyperclip==1.8.2
23:12:19 pyrsistent==0.20.0
23:12:19 python-cinderclient==9.5.0
23:12:19 python-dateutil==2.9.0.post0
23:12:19 python-heatclient==3.5.0
23:12:19 python-jenkins==1.8.2
23:12:19 python-keystoneclient==5.4.0
23:12:19 python-magnumclient==4.4.0
23:12:19 python-novaclient==18.6.0
23:12:19 python-openstackclient==6.6.0
23:12:19 python-swiftclient==4.5.0
23:12:19 PyYAML==6.0.1
23:12:19 referencing==0.34.0
23:12:19 requests==2.31.0
23:12:19 requests-oauthlib==2.0.0
23:12:19 requestsexceptions==1.4.0
23:12:19 rfc3986==2.0.0
23:12:19 rpds-py==0.18.0
23:12:19 rsa==4.9
23:12:19 ruamel.yaml==0.18.6
23:12:19 ruamel.yaml.clib==0.2.8
23:12:19 s3transfer==0.10.1
23:12:19 simplejson==3.19.2
23:12:19 six==1.16.0
23:12:19 smmap==5.0.1
23:12:19 soupsieve==2.5
23:12:19 stevedore==5.2.0
23:12:19 tabulate==0.9.0
23:12:19 toml==0.10.2
23:12:19 tomlkit==0.12.4
23:12:19 tqdm==4.66.2
23:12:19 typing_extensions==4.11.0
23:12:19 tzdata==2024.1
23:12:19 urllib3==1.26.18
23:12:19 virtualenv==20.25.3
23:12:19 wcwidth==0.2.13
23:12:19 websocket-client==1.7.0
23:12:19 wrapt==1.16.0
23:12:19 xdg==6.0.0
23:12:19 xmltodict==0.13.0
23:12:19 yq==3.4.1
23:12:19 [EnvInject] - Injecting environment variables from a build step.
23:12:19 [EnvInject] - Injecting as environment variables the properties content
23:12:19 SET_JDK_VERSION=openjdk17
23:12:19 GIT_URL="git://cloud.onap.org/mirror"
23:12:19 
23:12:19 [EnvInject] - Variables injected successfully.
23:12:19 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins15546175447872111146.sh
23:12:19 ---> update-java-alternatives.sh
23:12:19 ---> Updating Java version
23:12:19 ---> Ubuntu/Debian system detected
23:12:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:20 openjdk version "17.0.4" 2022-07-19
23:12:20 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:20 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:20 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:20 [EnvInject] - Injecting environment variables from a build step.
23:12:20 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:20 [EnvInject] - Variables injected successfully.
23:12:20 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins279420822397733023.sh
23:12:20 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:20 + set +u
23:12:20 + save_set
23:12:20 + RUN_CSIT_SAVE_SET=ehxB
23:12:20 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:20 + '[' 1 -eq 0 ']'
23:12:20 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:20 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:20 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:20 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:20 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:20 + export ROBOT_VARIABLES=
23:12:20 + ROBOT_VARIABLES=
23:12:20 + export PROJECT=pap
23:12:20 + PROJECT=pap
23:12:20 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:20 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:20 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:20 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:20 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:20 + relax_set
23:12:20 + set +e
23:12:20 + set +o pipefail
23:12:20 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:20 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:20 +++ mktemp -d
23:12:20 ++ ROBOT_VENV=/tmp/tmp.yGPf2tn4SQ
23:12:20 ++ echo ROBOT_VENV=/tmp/tmp.yGPf2tn4SQ
23:12:20 +++ python3 --version
23:12:20 ++ echo 'Python version is: Python 3.6.9'
23:12:20 Python version is: Python 3.6.9
23:12:20 ++ python3 -m venv --clear /tmp/tmp.yGPf2tn4SQ
23:12:21 ++ source /tmp/tmp.yGPf2tn4SQ/bin/activate
23:12:21 +++ deactivate nondestructive
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ '[' -n /bin/bash -o -n '' ']'
23:12:21 +++ hash -r
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ unset VIRTUAL_ENV
23:12:21 +++ '[' '!' nondestructive = nondestructive ']'
23:12:21 +++ VIRTUAL_ENV=/tmp/tmp.yGPf2tn4SQ
23:12:21 +++ export VIRTUAL_ENV
23:12:21 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:21 +++ PATH=/tmp/tmp.yGPf2tn4SQ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:21 +++ export PATH
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ '[' -z '' ']'
23:12:21 +++ _OLD_VIRTUAL_PS1=
23:12:21 +++ '[' 'x(tmp.yGPf2tn4SQ) ' '!=' x ']'
23:12:21 +++ PS1='(tmp.yGPf2tn4SQ) '
23:12:21 +++ export PS1
23:12:21 +++ '[' -n /bin/bash -o -n '' ']'
23:12:21 +++ hash -r
23:12:21 ++ set -exu
23:12:21 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:25 ++ echo 'Installing Python Requirements'
23:12:25 Installing Python Requirements
23:12:25 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:44 ++ python3 -m pip -qq freeze
23:12:44 bcrypt==4.0.1
23:12:44 beautifulsoup4==4.12.3
23:12:44 bitarray==2.9.2
23:12:44 certifi==2024.2.2
23:12:44 cffi==1.15.1
23:12:44 charset-normalizer==2.0.12
23:12:44 cryptography==40.0.2
23:12:44 decorator==5.1.1
23:12:44 elasticsearch==7.17.9
23:12:44 elasticsearch-dsl==7.4.1
23:12:44 enum34==1.1.10
23:12:44 idna==3.7
23:12:44 importlib-resources==5.4.0
23:12:44 ipaddr==2.2.0
23:12:44 isodate==0.6.1
23:12:44 jmespath==0.10.0
23:12:44 jsonpatch==1.32
23:12:44 jsonpath-rw==1.4.0
23:12:44 jsonpointer==2.3
23:12:44 lxml==5.2.1
23:12:44 netaddr==0.8.0
23:12:44 netifaces==0.11.0
23:12:44 odltools==0.1.28
23:12:44 paramiko==3.4.0
23:12:44 pkg_resources==0.0.0
23:12:44 ply==3.11
23:12:44 pyang==2.6.0
23:12:44 pyangbind==0.8.1
23:12:44 pycparser==2.21
23:12:44 pyhocon==0.3.60
23:12:44 PyNaCl==1.5.0
23:12:44 pyparsing==3.1.2
23:12:44 python-dateutil==2.9.0.post0
23:12:44 regex==2023.8.8
23:12:44 requests==2.27.1
23:12:44 robotframework==6.1.1
23:12:44 robotframework-httplibrary==0.4.2
23:12:44 robotframework-pythonlibcore==3.0.0
23:12:44 robotframework-requests==0.9.4
23:12:44 robotframework-selenium2library==3.0.0
23:12:44 robotframework-seleniumlibrary==5.1.3
23:12:44 robotframework-sshlibrary==3.8.0
23:12:44 scapy==2.5.0
23:12:44 scp==0.14.5
23:12:44 selenium==3.141.0
23:12:44 six==1.16.0
23:12:44 soupsieve==2.3.2.post1
23:12:44 urllib3==1.26.18
23:12:44 waitress==2.0.0
23:12:44 WebOb==1.8.7
23:12:44 WebTest==3.0.0
23:12:44 zipp==3.6.0
23:12:44 ++ mkdir -p /tmp/tmp.yGPf2tn4SQ/src/onap
23:12:44 ++ rm -rf /tmp/tmp.yGPf2tn4SQ/src/onap/testsuite
23:12:44 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:50 ++ echo 'Installing python confluent-kafka library'
23:12:50 Installing python confluent-kafka library
23:12:50 ++ python3 -m pip install -qq confluent-kafka
23:12:52 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:52 Uninstall docker-py and reinstall docker.
23:12:52 ++ python3 -m pip uninstall -y -qq docker
23:12:52 ++ python3 -m pip install -U -qq docker
23:12:53 ++ python3 -m pip -qq freeze
23:12:54 bcrypt==4.0.1
23:12:54 beautifulsoup4==4.12.3
23:12:54 bitarray==2.9.2
23:12:54 certifi==2024.2.2
23:12:54 cffi==1.15.1
23:12:54 charset-normalizer==2.0.12
23:12:54 confluent-kafka==2.3.0
23:12:54 cryptography==40.0.2
23:12:54 decorator==5.1.1
23:12:54 deepdiff==5.7.0
23:12:54 dnspython==2.2.1
23:12:54 docker==5.0.3
23:12:54 elasticsearch==7.17.9
23:12:54 elasticsearch-dsl==7.4.1
23:12:54 enum34==1.1.10
23:12:54 future==1.0.0
23:12:54 idna==3.7
23:12:54 importlib-resources==5.4.0
23:12:54 ipaddr==2.2.0
23:12:54 isodate==0.6.1
23:12:54 Jinja2==3.0.3
23:12:54 jmespath==0.10.0
23:12:54 jsonpatch==1.32
23:12:54 jsonpath-rw==1.4.0
23:12:54 jsonpointer==2.3
23:12:54 kafka-python==2.0.2
23:12:54 lxml==5.2.1
23:12:54 MarkupSafe==2.0.1
23:12:54 more-itertools==5.0.0
23:12:54 netaddr==0.8.0
23:12:54 netifaces==0.11.0
23:12:54 odltools==0.1.28
23:12:54 ordered-set==4.0.2
23:12:54 paramiko==3.4.0
23:12:54 pbr==6.0.0
23:12:54 pkg_resources==0.0.0
23:12:54 ply==3.11
23:12:54 protobuf==3.19.6
23:12:54 pyang==2.6.0
23:12:54 pyangbind==0.8.1
23:12:54 pycparser==2.21
23:12:54 pyhocon==0.3.60
23:12:54 PyNaCl==1.5.0
23:12:54 pyparsing==3.1.2
23:12:54 python-dateutil==2.9.0.post0
23:12:54 PyYAML==6.0.1
23:12:54 regex==2023.8.8
23:12:54 requests==2.27.1
23:12:54 robotframework==6.1.1
23:12:54 robotframework-httplibrary==0.4.2
23:12:54 robotframework-onap==0.6.0.dev105
23:12:54 robotframework-pythonlibcore==3.0.0
23:12:54 robotframework-requests==0.9.4
23:12:54 robotframework-selenium2library==3.0.0
23:12:54 robotframework-seleniumlibrary==5.1.3
23:12:54 robotframework-sshlibrary==3.8.0
23:12:54 robotlibcore-temp==1.0.2
23:12:54 scapy==2.5.0
23:12:54 scp==0.14.5
23:12:54 selenium==3.141.0
23:12:54 six==1.16.0
23:12:54 soupsieve==2.3.2.post1
23:12:54 urllib3==1.26.18
23:12:54 waitress==2.0.0
23:12:54 WebOb==1.8.7
23:12:54 websocket-client==1.3.1
23:12:54 WebTest==3.0.0
23:12:54 zipp==3.6.0
23:12:54 ++ uname
23:12:54 ++ grep -q Linux
23:12:54 ++ sudo apt-get -y -qq install libxml2-utils
23:12:54 + load_set
23:12:54 + _setopts=ehuxB
23:12:54 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:54 ++ tr : ' '
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o braceexpand
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o hashall
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o interactive-comments
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o nounset
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o xtrace
23:12:54 ++ echo ehuxB
23:12:54 ++ sed 's/./& /g'
23:12:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:54 + set +e
23:12:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:54 + set +h
23:12:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:54 + set +u
23:12:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:54 + set +x
23:12:54 + source_safely /tmp/tmp.yGPf2tn4SQ/bin/activate
23:12:54 + '[' -z /tmp/tmp.yGPf2tn4SQ/bin/activate ']'
23:12:54 + relax_set
23:12:54 + set +e
23:12:54 + set +o pipefail
23:12:54 + . /tmp/tmp.yGPf2tn4SQ/bin/activate
23:12:54 ++ deactivate nondestructive
23:12:54 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:54 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:54 ++ export PATH
23:12:54 ++ unset _OLD_VIRTUAL_PATH
23:12:54 ++ '[' -n '' ']'
23:12:54 ++ '[' -n /bin/bash -o -n '' ']'
23:12:54 ++ hash -r
23:12:54 ++ '[' -n '' ']'
23:12:54 ++ unset VIRTUAL_ENV
23:12:54 ++ '[' '!' nondestructive = nondestructive ']'
23:12:54 ++ VIRTUAL_ENV=/tmp/tmp.yGPf2tn4SQ
23:12:54 ++ export VIRTUAL_ENV
23:12:54 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:54 ++ PATH=/tmp/tmp.yGPf2tn4SQ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:54 ++ export PATH
23:12:54 ++ '[' -n '' ']'
23:12:54 ++ '[' -z '' ']'
23:12:54 ++ _OLD_VIRTUAL_PS1='(tmp.yGPf2tn4SQ) '
23:12:54 ++ '[' 'x(tmp.yGPf2tn4SQ) ' '!=' x ']'
23:12:54 ++ PS1='(tmp.yGPf2tn4SQ) (tmp.yGPf2tn4SQ) '
23:12:54 ++ export PS1
23:12:54 ++ '[' -n /bin/bash -o -n '' ']'
23:12:54 ++ hash -r
23:12:54 + load_set
23:12:54 + _setopts=hxB
23:12:54 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:54 ++ tr : ' '
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o braceexpand
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o hashall
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o interactive-comments
23:12:54 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:54 + set +o xtrace
23:12:54 ++ echo hxB
23:12:54 ++ sed 's/./& /g'
23:12:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:54 + set +h
23:12:54 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:54 + set +x
23:12:54 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:54 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:54 + export TEST_OPTIONS=
23:12:54 + TEST_OPTIONS=
23:12:54 ++ mktemp -d
23:12:54 + WORKDIR=/tmp/tmp.K0sYyH3Udx
23:12:54 + cd /tmp/tmp.K0sYyH3Udx
23:12:54 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:54 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:55 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:55 Configure a credential helper to remove this warning. See
23:12:55 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:55 
23:12:55 Login Succeeded
23:12:55 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:55 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:55 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:55 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:55 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:55 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:55 + relax_set
23:12:55 + set +e
23:12:55 + set +o pipefail
23:12:55 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:55 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:55 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:55 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:55 +++ GERRIT_BRANCH=master
23:12:55 +++ echo GERRIT_BRANCH=master
23:12:55 GERRIT_BRANCH=master
23:12:55 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:55 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:55 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:55 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:56 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:56 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:56 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:56 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:56 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:56 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:56 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:56 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:56 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:56 +++ grafana=false
23:12:56 +++ gui=false
23:12:56 +++ [[ 2 -gt 0 ]]
23:12:56 +++ key=apex-pdp
23:12:56 +++ case $key in
23:12:56 +++ echo apex-pdp
23:12:56 apex-pdp
23:12:56 +++ component=apex-pdp
23:12:56 +++ shift
23:12:56 +++ [[ 1 -gt 0 ]]
23:12:56 +++ key=--grafana
23:12:56 +++ case $key in
23:12:56 +++ grafana=true
23:12:56 +++ shift
23:12:56 +++ [[ 0 -gt 0 ]]
23:12:56 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:56 +++ echo 'Configuring docker compose...'
23:12:56 Configuring docker compose...
23:12:56 +++ source export-ports.sh
23:12:56 +++ source get-versions.sh
23:12:58 +++ '[' -z pap ']'
23:12:58 +++ '[' -n apex-pdp ']'
23:12:58 +++ '[' apex-pdp == logs ']'
23:12:58 +++ '[' true = true ']'
23:12:58 +++ echo 'Starting apex-pdp application with Grafana'
23:12:58 Starting apex-pdp application with Grafana
23:12:58 +++ docker-compose up -d apex-pdp grafana
23:12:59 Creating network "compose_default" with the default driver
23:12:59 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:59 latest: Pulling from prom/prometheus
23:13:02 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
23:13:02 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:13:02 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:13:03 latest: Pulling from grafana/grafana 23:13:08 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e 23:13:08 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:13:08 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:13:08 10.10.2: Pulling from mariadb 23:13:13 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:13:13 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:13:13 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 23:13:13 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:17 Digest: sha256:d8f1d8ae67fc0b53114a44577cb43c90a3a3281908d2f2418d7fbd203413bd6a 23:13:17 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 23:13:17 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:18 latest: Pulling from confluentinc/cp-zookeeper 23:13:29 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 23:13:29 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:29 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:29 latest: Pulling from confluentinc/cp-kafka 23:13:32 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa 23:13:32 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:32 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:32 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:41 Digest: sha256:76f202a4ce3fb449efc5539e6f77655fea2bbfecb1fbc1342810b45a9f33c637 23:13:41 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:13:41 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 
23:13:42 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:13:44 Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce 23:13:44 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:13:44 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:13:44 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:13:55 Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06 23:13:55 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:13:56 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 23:13:57 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:14:04 Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982 23:14:04 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:14:04 Creating mariadb ... 23:14:04 Creating prometheus ... 23:14:05 Creating simulator ... 23:14:05 Creating zookeeper ... 23:14:16 Creating mariadb ... done 23:14:16 Creating policy-db-migrator ... 23:14:17 Creating policy-db-migrator ... done 23:14:17 Creating policy-api ... 23:14:18 Creating policy-api ... done 23:14:18 Creating simulator ... done 23:14:19 Creating prometheus ... done 23:14:19 Creating grafana ... 23:14:20 Creating zookeeper ... done 23:14:20 Creating kafka ... 23:14:21 Creating grafana ... done 23:14:23 Creating kafka ... done 23:14:23 Creating policy-pap ... 23:14:24 Creating policy-pap ... done 23:14:24 Creating policy-apex-pdp ... 23:14:25 Creating policy-apex-pdp ... 
done 23:14:25 +++ echo 'Prometheus server: http://localhost:30259' 23:14:25 Prometheus server: http://localhost:30259 23:14:25 +++ echo 'Grafana server: http://localhost:30269' 23:14:25 Grafana server: http://localhost:30269 23:14:25 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:25 ++ sleep 10 23:14:35 ++ unset http_proxy https_proxy 23:14:35 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:35 Waiting for REST to come up on localhost port 30003... 23:14:35 NAMES STATUS 23:14:35 policy-apex-pdp Up 10 seconds 23:14:35 policy-pap Up 11 seconds 23:14:35 kafka Up 12 seconds 23:14:35 grafana Up 13 seconds 23:14:35 policy-api Up 16 seconds 23:14:35 zookeeper Up 14 seconds 23:14:35 simulator Up 16 seconds 23:14:35 mariadb Up 18 seconds 23:14:35 prometheus Up 15 seconds 23:14:40 NAMES STATUS 23:14:40 policy-apex-pdp Up 15 seconds 23:14:40 policy-pap Up 16 seconds 23:14:40 kafka Up 17 seconds 23:14:40 grafana Up 18 seconds 23:14:40 policy-api Up 21 seconds 23:14:40 zookeeper Up 19 seconds 23:14:40 simulator Up 21 seconds 23:14:40 mariadb Up 23 seconds 23:14:40 prometheus Up 20 seconds 23:14:45 NAMES STATUS 23:14:45 policy-apex-pdp Up 20 seconds 23:14:45 policy-pap Up 21 seconds 23:14:45 kafka Up 22 seconds 23:14:45 grafana Up 23 seconds 23:14:45 policy-api Up 26 seconds 23:14:45 zookeeper Up 24 seconds 23:14:45 simulator Up 26 seconds 23:14:45 mariadb Up 28 seconds 23:14:45 prometheus Up 25 seconds 23:14:50 NAMES STATUS 23:14:50 policy-apex-pdp Up 25 seconds 23:14:50 policy-pap Up 26 seconds 23:14:50 kafka Up 27 seconds 23:14:50 grafana Up 28 seconds 23:14:50 policy-api Up 31 seconds 23:14:50 zookeeper Up 29 seconds 23:14:50 simulator Up 31 seconds 23:14:50 mariadb Up 33 seconds 23:14:50 prometheus Up 30 seconds 23:14:55 NAMES STATUS 23:14:55 policy-apex-pdp Up 30 seconds 23:14:55 policy-pap Up 31 seconds 23:14:55 kafka Up 32 seconds 23:14:55 grafana Up 33 seconds 23:14:55 policy-api Up 37 
seconds 23:14:55 zookeeper Up 34 seconds 23:14:55 simulator Up 36 seconds 23:14:55 mariadb Up 38 seconds 23:14:55 prometheus Up 35 seconds 23:14:55 ++ export 'SUITES=pap-test.robot 23:14:55 pap-slas.robot' 23:14:55 ++ SUITES='pap-test.robot 23:14:55 pap-slas.robot' 23:14:55 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:55 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:14:55 + load_set 23:14:55 + _setopts=hxB 23:14:55 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:14:55 ++ tr : ' ' 23:14:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:55 + set +o braceexpand 23:14:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:55 + set +o hashall 23:14:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:55 + set +o interactive-comments 23:14:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:55 + set +o xtrace 23:14:55 ++ echo hxB 23:14:55 ++ sed 's/./& /g' 23:14:55 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:55 + set +h 23:14:55 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:55 + set +x 23:14:55 + docker_stats 23:14:55 ++ uname -s 23:14:55 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:14:55 + '[' Linux == Darwin ']' 23:14:55 + sh -c 'top -bn1 | head -3' 23:14:55 top - 23:14:55 up 4 min, 0 users, load average: 2.82, 1.25, 0.50 23:14:55 Tasks: 207 total, 1 running, 130 sleeping, 0 stopped, 0 zombie 23:14:55 %Cpu(s): 13.0 us, 2.8 sy, 0.0 ni, 78.6 id, 5.5 wa, 0.0 hi, 0.1 si, 0.1 st 23:14:55 + echo 23:14:55 + sh -c 'free -h' 23:14:55 23:14:55 + echo 23:14:55 total used free shared buff/cache available 23:14:55 Mem: 31G 2.6G 22G 1.3M 6.2G 28G 23:14:55 Swap: 1.0G 0B 1.0G 23:14:55 23:14:55 + docker ps --format 'table {{ 
.Names }}\t{{ .Status }}' 23:14:55 NAMES STATUS 23:14:55 policy-apex-pdp Up 30 seconds 23:14:55 policy-pap Up 31 seconds 23:14:55 kafka Up 32 seconds 23:14:55 grafana Up 33 seconds 23:14:55 policy-api Up 37 seconds 23:14:55 zookeeper Up 34 seconds 23:14:55 simulator Up 37 seconds 23:14:55 mariadb Up 39 seconds 23:14:55 prometheus Up 35 seconds 23:14:55 + echo 23:14:55 + docker stats --no-stream 23:14:55 23:14:58 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:14:58 0f05ca022347 policy-apex-pdp 141.83% 191.1MiB / 31.41GiB 0.59% 7.12kB / 6.8kB 0B / 0B 48 23:14:58 cd968b74ff1a policy-pap 2.52% 513.1MiB / 31.41GiB 1.59% 31kB / 32.8kB 0B / 149MB 63 23:14:58 9fd3e2c8b405 kafka 0.77% 384.8MiB / 31.41GiB 1.20% 71.9kB / 73.7kB 0B / 500kB 83 23:14:58 b218cb6f2c5a grafana 0.03% 54.37MiB / 31.41GiB 0.17% 19.2kB / 3.58kB 0B / 24.9MB 14 23:14:58 98c23f0e8294 policy-api 0.11% 466.3MiB / 31.41GiB 1.45% 988kB / 646kB 0B / 0B 52 23:14:58 00ec3651d3eb zookeeper 0.09% 101.2MiB / 31.41GiB 0.31% 56.5kB / 51.2kB 0B / 479kB 60 23:14:58 1bff2dcb8737 simulator 0.07% 121.3MiB / 31.41GiB 0.38% 1.27kB / 0B 0B / 0B 76 23:14:58 3da3a3116834 mariadb 0.02% 102MiB / 31.41GiB 0.32% 934kB / 1.18MB 11MB / 57MB 40 23:14:58 b174b6fc7f4a prometheus 0.22% 18.21MiB / 31.41GiB 0.06% 1.52kB / 432B 225kB / 0B 12 23:14:58 + echo 23:14:58 23:14:58 + cd /tmp/tmp.K0sYyH3Udx 23:14:58 + echo 'Reading the testplan:' 23:14:58 Reading the testplan: 23:14:58 + echo 'pap-test.robot 23:14:58 pap-slas.robot' 23:14:58 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:14:58 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:14:58 + cat testplan.txt 23:14:58 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 23:14:58 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:14:58 ++ xargs 23:14:58 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:14:58 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:58 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:58 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:58 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:58 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:58 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
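The `relax_set` / `load_set` calls traced throughout this log come from the CSIT driver script, which is not shown here. A minimal, hypothetical reconstruction of the idiom visible in the xtrace output (the exact function bodies are an assumption inferred from the `+ set +o ...` lines):

```shell
#!/usr/bin/env bash
# Sketch only, not the actual CSIT scripts: relax_set loosens error handling
# around the Robot run so a failing test cannot abort the whole job, and
# load_set re-applies the baseline option set saved in _setopts ("hxB").

relax_set() {
    set +e           # a failing command must not terminate the script
    set +o pipefail  # nor may a failing stage in the middle of a pipeline
}

load_set() {
    local _setopts=hxB
    # Disable every long-form option currently recorded in SHELLOPTS,
    # matching the "+ set +o braceexpand" ... "+ set +o xtrace" trace lines.
    for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
        set +o "$i"
    done
    # Then clear each single-letter flag saved in _setopts ("set +h", "set +x").
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set "+$i"
    done
}
```

This explains why the trace goes silent after each `+ set +x`: disabling xtrace is the last thing `load_set` does.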
23:14:58 + relax_set
23:14:58 + set +e
23:14:58 + set +o pipefail
23:14:58 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:58 ==============================================================================
23:14:58 pap
23:14:58 ==============================================================================
23:14:58 pap.Pap-Test
23:14:58 ==============================================================================
23:14:59 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:59 ------------------------------------------------------------------------------
23:15:00 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:15:00 ------------------------------------------------------------------------------
23:15:00 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:15:00 ------------------------------------------------------------------------------
23:15:00 Healthcheck :: Verify policy pap health check | PASS |
23:15:00 ------------------------------------------------------------------------------
23:15:21 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:21 ------------------------------------------------------------------------------
23:15:21 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:21 ------------------------------------------------------------------------------
23:15:22 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:22 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:22 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:22 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:22 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:23 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:24 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:44 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:44 ------------------------------------------------------------------------------
23:15:44 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:44 ------------------------------------------------------------------------------
23:15:44 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:44 ------------------------------------------------------------------------------
23:15:44 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:44 ------------------------------------------------------------------------------
23:15:44 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:44 ------------------------------------------------------------------------------
23:15:44 pap.Pap-Test | PASS |
23:15:44 22 tests, 22 passed, 0 failed
23:15:44 ==============================================================================
23:15:44 pap.Pap-Slas
23:15:44 ==============================================================================
23:16:44 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:44 ------------------------------------------------------------------------------
23:16:44 pap.Pap-Slas | PASS |
23:16:44 8 tests, 8 passed, 0 failed
23:16:44 ==============================================================================
23:16:44 pap | PASS |
23:16:44 30 tests, 30 passed, 0 failed
23:16:44 ==============================================================================
23:16:44 Output: /tmp/tmp.K0sYyH3Udx/output.xml
23:16:44 Log: /tmp/tmp.K0sYyH3Udx/log.html
23:16:44 Report: /tmp/tmp.K0sYyH3Udx/report.html
23:16:44 + RESULT=0
23:16:44 + load_set
23:16:44 + _setopts=hxB
23:16:44 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:44 ++ tr : ' '
23:16:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:44 + set +o braceexpand
23:16:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:44 + set +o hashall
23:16:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:44 + set +o interactive-comments
23:16:44 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:44 + set +o xtrace
23:16:44 ++ echo hxB
23:16:44 ++ sed 's/./& /g'
23:16:44 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:44 + set +h
23:16:44 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:44 + set +x
23:16:44 + echo 'RESULT: 0'
23:16:44 RESULT: 0
23:16:44 + exit 0
23:16:44 + on_exit
23:16:44 + rc=0
23:16:44 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:44 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:44 NAMES STATUS
23:16:44 policy-apex-pdp Up 2 minutes
23:16:44 policy-pap Up 2 minutes
23:16:44 kafka Up 2 minutes
23:16:44 grafana Up 2 minutes
23:16:44 policy-api Up 2 minutes
23:16:44 zookeeper Up 2 minutes
23:16:44 simulator Up 2 minutes
23:16:44 mariadb Up 2 minutes
23:16:44 prometheus Up 2 minutes
23:16:44 + docker_stats
23:16:44 ++ uname -s
23:16:44 + '[' Linux == Darwin ']'
23:16:44 + sh -c 'top -bn1 | head -3'
23:16:45 top - 23:16:45 up 6 min, 0 users, load average: 0.59, 0.92, 0.46
23:16:45 Tasks: 196 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
23:16:45 %Cpu(s): 10.7 us, 2.1 sy, 0.0 ni, 83.1 id, 3.9 wa, 0.0 hi, 0.1 si, 0.1 st
23:16:45 + echo
23:16:45
23:16:45 + sh -c 'free -h'
23:16:45 total used free shared buff/cache available
23:16:45 Mem: 31G 2.7G 22G 1.3M 6.2G 28G
23:16:45 Swap: 1.0G 0B 1.0G
23:16:45 + echo
23:16:45
23:16:45 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:45 NAMES STATUS
23:16:45 policy-apex-pdp Up 2 minutes
23:16:45 policy-pap Up 2 minutes
23:16:45 kafka Up 2 minutes
23:16:45 grafana Up 2 minutes
23:16:45 policy-api Up 2 minutes
23:16:45 zookeeper Up 2 minutes
23:16:45 simulator Up 2 minutes
23:16:45 mariadb Up 2 minutes
23:16:45 prometheus Up 2 minutes
23:16:45 + echo
23:16:45
23:16:45 + docker stats --no-stream
23:16:47 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:16:47 0f05ca022347 policy-apex-pdp 0.39% 179.9MiB / 31.41GiB 0.56% 55.2kB / 79kB 0B / 0B 52
23:16:47 cd968b74ff1a policy-pap 0.67% 501.7MiB / 31.41GiB 1.56% 2.47MB / 1.05MB 0B / 149MB 67
23:16:47 9fd3e2c8b405 kafka 1.14% 407.3MiB / 31.41GiB 1.27% 241kB / 216kB 0B / 606kB 85
23:16:47 b218cb6f2c5a grafana 0.06% 57.4MiB / 31.41GiB 0.18% 20kB / 4.53kB 0B / 24.9MB 14
23:16:47 98c23f0e8294 policy-api 0.10% 471.7MiB / 31.41GiB 1.47% 2.45MB / 1.1MB 0B / 0B 55
23:16:47 00ec3651d3eb zookeeper 0.10% 101.2MiB / 31.41GiB 0.31% 59.3kB / 52.7kB 0B / 479kB 60
23:16:47 1bff2dcb8737 simulator 0.07% 121.5MiB / 31.41GiB 0.38% 1.5kB / 0B 0B / 0B 78
23:16:47 3da3a3116834 mariadb 0.02% 103.2MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 57.2MB 28
23:16:47 b174b6fc7f4a prometheus 0.00% 24.77MiB / 31.41GiB 0.08% 180kB / 10.1kB 225kB / 0B 12
23:16:47 + echo
23:16:47
23:16:47 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:47 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:47 + relax_set
23:16:47 + set +e
23:16:47 + set +o pipefail
23:16:47 + .
/w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:47 ++ echo 'Shut down started!'
23:16:47 Shut down started!
23:16:47 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:47 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:47 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:47 ++ source export-ports.sh
23:16:47 ++ source get-versions.sh
23:16:50 ++ echo 'Collecting logs from docker compose containers...'
23:16:50 Collecting logs from docker compose containers...
23:16:50 ++ docker-compose logs
23:16:52 ++ cat docker_compose.log
23:16:52 Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, zookeeper, simulator, mariadb, prometheus
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948593789Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-18T23:14:21Z
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948825262Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948831762Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948836412Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948839403Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948842043Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948844843Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
23:16:52 grafana | logger=settings
t=2024-04-18T23:14:21.948847703Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948869924Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948874775Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948877955Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948880905Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948884275Z level=info msg=Target target=[all]
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948895036Z level=info msg="Path Home" path=/usr/share/grafana
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948899316Z level=info msg="Path Data" path=/var/lib/grafana
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948902576Z level=info msg="Path Logs" path=/var/log/grafana
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948905786Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948909897Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
23:16:52 grafana | logger=settings t=2024-04-18T23:14:21.948917917Z level=info msg="App mode production"
23:16:52 grafana | logger=sqlstore t=2024-04-18T23:14:21.949205683Z level=info msg="Connecting to DB" dbtype=sqlite3
23:16:52 grafana | logger=sqlstore t=2024-04-18T23:14:21.949225784Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.949846288Z level=info msg="Starting DB migrations"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.950749728Z level=info msg="Executing migration" id="create migration_log table"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.951558762Z level=info msg="Migration successfully executed" id="create migration_log table" duration=808.674µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.955916922Z level=info msg="Executing migration" id="create user table"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.956919867Z level=info msg="Migration successfully executed" id="create user table" duration=1.002735ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.962104352Z level=info msg="Executing migration" id="add unique index user.login"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.963251635Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.143183ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.968471732Z level=info msg="Executing migration" id="add unique index user.email"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.969632935Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.160124ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.981243163Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.9822732Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.029017ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.988995719Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.989702908Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=706.879µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.993655185Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:21.996227497Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.571832ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.000743775Z level=info msg="Executing migration" id="create user table v2"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.0015625Z level=info msg="Migration successfully executed" id="create user table v2" duration=818.555µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.007260372Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.008342346Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.081474ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.016248729Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.017380795Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.139216ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.023295894Z level=info msg="Executing migration" id="copy data_source v1 to v2"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.023785842Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=488.708µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.027546268Z level=info msg="Executing migration" id="Drop old table user_v1"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.028008644Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=462.307µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.035502214Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.03718292Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.680246ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.050798101Z level=info msg="Executing migration" id="Update user table charset"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.050846373Z level=info msg="Migration successfully executed" id="Update user table charset" duration=49.432µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.057483124Z level=info msg="Executing migration" id="Add last_seen_at column to user"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.058658481Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.175107ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.063394383Z level=info msg="Executing migration" id="Add missing user data"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.063642507Z level=info msg="Migration successfully executed" id="Add missing user data" duration=252.374µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.068087322Z level=info msg="Executing migration" id="Add is_disabled column to user"
23:16:52 kafka | ===> User
23:16:52 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:52 kafka | ===> Configuring ...
23:16:52 kafka | Running in Zookeeper mode...
23:16:52 kafka | ===> Running preflight checks ...
23:16:52 kafka | ===> Check if /var/lib/kafka/data is writable ...
23:16:52 kafka | ===> Check if Zookeeper is healthy ...
23:16:52 kafka | [2024-04-18 23:14:27,292] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:host.name=9fd3e2c8b405 (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-
2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client 
environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,293] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,294] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,296] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,299] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:52 kafka | [2024-04-18 23:14:27,303] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:52 kafka | [2024-04-18 23:14:27,310] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:27,326] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:27,327] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:27,335] INFO Socket connection established, initiating session, client: /172.17.0.9:44464, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:27,372] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003d8b60000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:27,492] INFO Session: 0x1000003d8b60000 closed (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:27,492] INFO EventThread shut down for session: 0x1000003d8b60000 (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | Using log4j config /etc/kafka/log4j.properties
23:16:52 kafka | ===> Launching ...
23:16:52 kafka | ===> Launching kafka ...
23:16:52 kafka | [2024-04-18 23:14:28,190] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
23:16:52 kafka | [2024-04-18 23:14:28,511] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:52 kafka | [2024-04-18 23:14:28,579] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
23:16:52 kafka | [2024-04-18 23:14:28,580] INFO starting (kafka.server.KafkaServer)
23:16:52 kafka | [2024-04-18 23:14:28,580] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
23:16:52 kafka | [2024-04-18 23:14:28,592] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181.
(kafka.zookeeper.ZooKeeperClient) 23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:host.name=9fd3e2c8b405 (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/
bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,596] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,597] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,598] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper)
23:16:52 kafka | [2024-04-18 23:14:28,602] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:52 kafka | [2024-04-18 23:14:28,608] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:28,609] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
23:16:52 kafka | [2024-04-18 23:14:28,614] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:28,623] INFO Socket connection established, initiating session, client: /172.17.0.9:44466, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:28,635] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003d8b60001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
23:16:52 kafka | [2024-04-18 23:14:28,639] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
23:16:52 kafka | [2024-04-18 23:14:28,941] INFO Cluster ID = 3CcxO9QMSqWFRVbl82UfdQ (kafka.server.KafkaServer)
23:16:52 kafka | [2024-04-18 23:14:28,946] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
23:16:52 kafka | [2024-04-18 23:14:28,997] INFO KafkaConfig values:
23:16:52 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
23:16:52 kafka | alter.config.policy.class.name = null
23:16:52 kafka | alter.log.dirs.replication.quota.window.num = 11
23:16:52 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
23:16:52 kafka | authorizer.class.name =
23:16:52 kafka | auto.create.topics.enable = true
23:16:52 kafka | auto.include.jmx.reporter = true
23:16:52 kafka | auto.leader.rebalance.enable = true
23:16:52 kafka | background.threads = 10
23:16:52 kafka | broker.heartbeat.interval.ms = 2000
23:16:52 kafka | broker.id = 1
23:16:52 kafka | broker.id.generation.enable = true
23:16:52 kafka | broker.rack = null
23:16:52 kafka | broker.session.timeout.ms = 9000
23:16:52 kafka | client.quota.callback.class = null
23:16:52 kafka | compression.type = producer
23:16:52 kafka | connection.failed.authentication.delay.ms = 100
23:16:52 kafka | connections.max.idle.ms = 600000
23:16:52 kafka | connections.max.reauth.ms = 0
23:16:52 kafka | control.plane.listener.name = null
23:16:52 kafka | controlled.shutdown.enable = true
23:16:52 kafka | controlled.shutdown.max.retries = 3
23:16:52 kafka | controlled.shutdown.retry.backoff.ms = 5000
23:16:52 kafka | controller.listener.names = null
23:16:52 kafka | controller.quorum.append.linger.ms = 25
23:16:52 kafka | controller.quorum.election.backoff.max.ms = 1000
23:16:52 kafka | controller.quorum.election.timeout.ms = 1000
23:16:52 kafka | controller.quorum.fetch.timeout.ms = 2000
23:16:52 kafka | controller.quorum.request.timeout.ms = 2000
23:16:52 kafka | controller.quorum.retry.backoff.ms = 20
23:16:52 kafka | controller.quorum.voters = []
23:16:52 kafka | controller.quota.window.num = 11
23:16:52 kafka | controller.quota.window.size.seconds = 1
23:16:52 kafka | controller.socket.timeout.ms = 30000
23:16:52 kafka | create.topic.policy.class.name = null
23:16:52 kafka | default.replication.factor = 1
23:16:52 kafka | delegation.token.expiry.check.interval.ms = 3600000
23:16:52 kafka | delegation.token.expiry.time.ms = 86400000
23:16:52 kafka | delegation.token.master.key = null
23:16:52 kafka | delegation.token.max.lifetime.ms = 604800000
23:16:52 kafka | delegation.token.secret.key = null
23:16:52 kafka | delete.records.purgatory.purge.interval.requests = 1
23:16:52 kafka | delete.topic.enable = true
23:16:52 kafka | early.start.listeners = null
23:16:52 kafka | fetch.max.bytes = 57671680
23:16:52 kafka | fetch.purgatory.purge.interval.requests = 1000
23:16:52 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
23:16:52 kafka | group.consumer.heartbeat.interval.ms = 5000
23:16:52 kafka | group.consumer.max.heartbeat.interval.ms = 15000
23:16:52 kafka | group.consumer.max.session.timeout.ms = 60000
23:16:52 kafka | group.consumer.max.size = 2147483647
23:16:52 kafka | group.consumer.min.heartbeat.interval.ms = 5000
23:16:52 kafka | group.consumer.min.session.timeout.ms = 45000
23:16:52 kafka | group.consumer.session.timeout.ms = 45000
23:16:52 kafka | group.coordinator.new.enable = false
23:16:52 kafka | group.coordinator.threads = 1
23:16:52 kafka | group.initial.rebalance.delay.ms = 3000
23:16:52 kafka | group.max.session.timeout.ms = 1800000
23:16:52 kafka | group.max.size = 2147483647
23:16:52 kafka | group.min.session.timeout.ms = 6000
23:16:52 kafka | initial.broker.registration.timeout.ms = 60000
23:16:52 kafka | inter.broker.listener.name = PLAINTEXT
23:16:52 kafka | inter.broker.protocol.version = 3.6-IV2
23:16:52 kafka | kafka.metrics.polling.interval.secs = 10
23:16:52 kafka | kafka.metrics.reporters = []
23:16:52 kafka | leader.imbalance.check.interval.seconds = 300
23:16:52 kafka | leader.imbalance.per.broker.percentage = 10
23:16:52 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
23:16:52 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
23:16:52 kafka | log.cleaner.backoff.ms = 15000
23:16:52 kafka | log.cleaner.dedupe.buffer.size = 134217728
23:16:52 kafka | log.cleaner.delete.retention.ms = 86400000
23:16:52 kafka | log.cleaner.enable = true
23:16:52 kafka | log.cleaner.io.buffer.load.factor = 0.9
23:16:52 kafka | log.cleaner.io.buffer.size = 524288
23:16:52 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
23:16:52 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
23:16:52 kafka | log.cleaner.min.cleanable.ratio = 0.5
23:16:52 kafka | log.cleaner.min.compaction.lag.ms = 0
23:16:52 kafka | log.cleaner.threads = 1
23:16:52 kafka | log.cleanup.policy = [delete]
23:16:52 kafka | log.dir = /tmp/kafka-logs
23:16:52 kafka | log.dirs = /var/lib/kafka/data
23:16:52 kafka | log.flush.interval.messages = 9223372036854775807
23:16:52 kafka | log.flush.interval.ms = null
23:16:52 kafka | log.flush.offset.checkpoint.interval.ms = 60000
23:16:52 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
23:16:52 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
23:16:52 kafka | log.index.interval.bytes = 4096
23:16:52 kafka | log.index.size.max.bytes = 10485760
23:16:52 kafka | log.local.retention.bytes = -2
23:16:52 kafka | log.local.retention.ms = -2
23:16:52 kafka | log.message.downconversion.enable = true
23:16:52 kafka | log.message.format.version = 3.0-IV1
23:16:52 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
23:16:52 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
23:16:52 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
23:16:52 kafka | log.message.timestamp.type = CreateTime
23:16:52 kafka | log.preallocate = false
23:16:52 kafka | log.retention.bytes = -1
23:16:52 kafka | log.retention.check.interval.ms = 300000
23:16:52 kafka | log.retention.hours = 168
23:16:52 kafka | log.retention.minutes = null
23:16:52 kafka | log.retention.ms = null
23:16:52 kafka | log.roll.hours = 168
23:16:52 kafka | log.roll.jitter.hours = 0
23:16:52 kafka | log.roll.jitter.ms = null
23:16:52 kafka | log.roll.ms = null
23:16:52 kafka | log.segment.bytes = 1073741824
23:16:52 kafka | log.segment.delete.delay.ms = 60000
23:16:52 kafka | max.connection.creation.rate = 2147483647
23:16:52 kafka | max.connections = 2147483647
23:16:52 kafka | max.connections.per.ip = 2147483647
23:16:52 kafka | max.connections.per.ip.overrides =
23:16:52 kafka | max.incremental.fetch.session.cache.slots = 1000
23:16:52 kafka | message.max.bytes = 1048588
23:16:52 kafka | metadata.log.dir = null
23:16:52 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
23:16:52 kafka | metadata.log.max.snapshot.interval.ms = 3600000
23:16:52 kafka | metadata.log.segment.bytes = 1073741824
23:16:52 kafka | metadata.log.segment.min.bytes = 8388608
23:16:52 kafka | metadata.log.segment.ms = 604800000
23:16:52 kafka | metadata.max.idle.interval.ms = 500
23:16:52 kafka | metadata.max.retention.bytes = 104857600
23:16:52 kafka | metadata.max.retention.ms = 604800000
23:16:52 kafka | metric.reporters = []
23:16:52 kafka | metrics.num.samples = 2
23:16:52 kafka | metrics.recording.level = INFO
23:16:52 kafka | metrics.sample.window.ms = 30000
23:16:52 kafka | min.insync.replicas = 1
23:16:52 kafka | node.id = 1
23:16:52 kafka | num.io.threads = 8
23:16:52 kafka | num.network.threads = 3
23:16:52 kafka | num.partitions = 1
23:16:52 kafka | num.recovery.threads.per.data.dir = 1
23:16:52 kafka | num.replica.alter.log.dirs.threads = null
23:16:52 kafka | num.replica.fetchers = 1
23:16:52 kafka | offset.metadata.max.bytes = 4096
23:16:52 kafka | offsets.commit.required.acks = -1
23:16:52 kafka | offsets.commit.timeout.ms = 5000
23:16:52 kafka | offsets.load.buffer.size = 5242880
23:16:52 kafka | offsets.retention.check.interval.ms = 600000
23:16:52 kafka | offsets.retention.minutes = 10080
23:16:52 kafka | offsets.topic.compression.codec = 0
23:16:52 kafka | offsets.topic.num.partitions = 50
23:16:52 kafka | offsets.topic.replication.factor = 1
23:16:52 kafka | offsets.topic.segment.bytes = 104857600
23:16:52 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
23:16:52 kafka | password.encoder.iterations = 4096
23:16:52 kafka | password.encoder.key.length = 128
23:16:52 kafka | password.encoder.keyfactory.algorithm = null
23:16:52 kafka | password.encoder.old.secret = null
23:16:52 kafka | password.encoder.secret = null
23:16:52 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
23:16:52 kafka | process.roles = []
23:16:52 kafka | producer.id.expiration.check.interval.ms = 600000
23:16:52 kafka | producer.id.expiration.ms = 86400000
23:16:52 kafka | producer.purgatory.purge.interval.requests = 1000
23:16:52 kafka | queued.max.request.bytes = -1
23:16:52 kafka | queued.max.requests = 500
23:16:52 mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
23:16:52 policy-apex-pdp | Waiting for mariadb port 3306...
23:16:52 kafka | quota.window.num = 11
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.069335684Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.247791ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.074188232Z level=info msg="Executing migration" id="Add index user.login/user.email"
23:16:52 policy-apex-pdp | mariadb (172.17.0.2:3306) open
23:16:52 policy-db-migrator | Waiting for mariadb port 3306...
23:16:52 kafka | quota.window.size.seconds = 1
23:16:52 policy-api | Waiting for mariadb port 3306...
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.075167158Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=977.516µs
23:16:52 mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
23:16:52 policy-apex-pdp | Waiting for kafka port 9092...
23:16:52 policy-pap | Waiting for mariadb port 3306...
23:16:52 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:52 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
23:16:52 policy-api | mariadb (172.17.0.2:3306) open
23:16:52 policy-api | Waiting for policy-db-migrator port 6824...
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.079366649Z level=info msg="Executing migration" id="Add is_service_account column to user"
23:16:52 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:52 zookeeper | ===> User
23:16:52 mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
23:16:52 policy-apex-pdp | kafka (172.17.0.9:9092) open
23:16:52 policy-pap | mariadb (172.17.0.2:3306) open
23:16:52 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:52 kafka | remote.log.manager.task.interval.ms = 30000
23:16:52 policy-api | policy-db-migrator (172.17.0.6:6824) open
23:16:52 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.081153001Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.785602ms
23:16:52 simulator | overriding logback.xml
23:16:52 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:52 mariadb | 2024-04-18 23:14:16+00:00 [Note] [Entrypoint]: Initializing database files
23:16:52 policy-apex-pdp | Waiting for pap port 6969...
23:16:52 policy-pap | Waiting for kafka port 9092...
23:16:52 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:52 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
23:16:52 policy-api |
23:16:52 policy-api | . ____ _ __ _ _
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.089298648Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
23:16:52 simulator | 2024-04-18 23:14:19,090 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:52 zookeeper | ===> Configuring ...
23:16:52 mariadb | 2024-04-18 23:14:17 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:16:52 policy-pap | kafka (172.17.0.9:9092) open
23:16:52 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:52 kafka | remote.log.manager.task.retry.backoff.ms = 500
23:16:52 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
23:16:52 prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.102495885Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.197527ms
23:16:52 simulator | 2024-04-18 23:14:19,158 INFO org.onap.policy.models.simulators starting
23:16:52 zookeeper | ===> Running preflight checks ...
23:16:52 mariadb | 2024-04-18 23:14:17 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:16:52 policy-pap | Waiting for api port 6969...
23:16:52 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:52 kafka | remote.log.manager.task.retry.jitter = 0.2
23:16:52 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
23:16:52 prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.114738997Z level=info msg="Executing migration" id="Add uid column to user"
23:16:52 simulator | 2024-04-18 23:14:19,159 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
23:16:52 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
23:16:52 mariadb | 2024-04-18 23:14:17 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:16:52 policy-pap | api (172.17.0.7:6969) open
23:16:52 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:52 kafka | remote.log.manager.thread.pool.size = 10
23:16:52 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
23:16:52 prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.116500048Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.760911ms
23:16:52 simulator | 2024-04-18 23:14:19,346 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
23:16:52 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
23:16:52 mariadb |
23:16:52 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded!
23:16:52 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
23:16:52 prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.123789406Z level=info msg="Executing migration" id="Update uid column values for users"
23:16:52 simulator | 2024-04-18 23:14:19,347 INFO org.onap.policy.models.simulators starting A&AI simulator
23:16:52 zookeeper | ===> Launching ...
23:16:52 mariadb |
23:16:52 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
23:16:52 mariadb | To do so, start the server, then issue the following command:
23:16:52 policy-apex-pdp | pap (172.17.0.10:6969) open
23:16:52 policy-db-migrator | 321 blocks
23:16:52 policy-api | =========|_|==============|___/=/_/_/_/
23:16:52 prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.123964766Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=175.39µs
23:16:52 simulator | 2024-04-18 23:14:19,447 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:52 zookeeper | ===> Launching zookeeper ...
23:16:52 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
23:16:52 mariadb |
23:16:52 mariadb | '/usr/bin/mysql_secure_installation'
23:16:52 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
23:16:52 policy-db-migrator | Preparing upgrade release version: 0800
23:16:52 policy-api | :: Spring Boot :: (v3.1.10)
23:16:52 prometheus | ts=2024-04-18T23:14:19.736Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.128243371Z level=info msg="Executing migration" id="Add unique index user_uid"
23:16:52 simulator | 2024-04-18 23:14:19,457 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:52 zookeeper | [2024-04-18 23:14:24,543] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:52 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
23:16:52 mariadb |
23:16:52 mariadb | which will also give you the option of removing the test
23:16:52 policy-apex-pdp | [2024-04-18T23:14:55.895+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.042+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:52 policy-api |
23:16:52 prometheus | ts=2024-04-18T23:14:19.742Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.129077449Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=828.578µs
23:16:52 simulator | 2024-04-18 23:14:19,460 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:52 zookeeper | [2024-04-18 23:14:24,551] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:52 policy-pap |
23:16:52 mariadb | databases and anonymous user created by default. This is
23:16:52 mariadb | strongly recommended for production servers.
23:16:52 policy-db-migrator | Preparing upgrade release version: 0900
23:16:52 policy-apex-pdp | allow.auto.create.topics = true
23:16:52 policy-api | [2024-04-18T23:14:31.694+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
23:16:52 prometheus | ts=2024-04-18T23:14:19.743Z caller=main.go:1129 level=info msg="Starting TSDB ..."
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.134080056Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
23:16:52 simulator | 2024-04-18 23:14:19,464 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:52 zookeeper | [2024-04-18 23:14:24,551] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:52 policy-pap | . ____ _ __ _ _
23:16:52 kafka | remote.log.metadata.custom.metadata.max.bytes = 128
23:16:52 mariadb |
23:16:52 policy-db-migrator | Preparing upgrade release version: 1000
23:16:52 policy-apex-pdp | auto.commit.interval.ms = 5000
23:16:52 policy-api | [2024-04-18T23:14:31.759+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin)
23:16:52 prometheus | ts=2024-04-18T23:14:19.744Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.134595925Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=509.149µs
23:16:52 simulator | 2024-04-18 23:14:19,523 INFO Session workerName=node0
23:16:52 zookeeper | [2024-04-18 23:14:24,551] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:52 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
23:16:52 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
23:16:52 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
23:16:52 policy-db-migrator | Preparing upgrade release version: 1100
23:16:52 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:52 policy-api | [2024-04-18T23:14:31.760+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
23:16:52 prometheus | ts=2024-04-18T23:14:19.744Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.138257395Z level=info msg="Executing migration" id="create temp user table v1-7"
23:16:52 simulator | 2024-04-18 23:14:20,080 INFO Using GSON for REST calls
23:16:52 zookeeper | [2024-04-18 23:14:24,551] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:52 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
23:16:52 kafka | remote.log.metadata.manager.class.path = null
23:16:52 mariadb |
23:16:52 policy-db-migrator | Preparing upgrade release version: 1200
23:16:52 policy-apex-pdp | auto.offset.reset = latest
23:16:52 policy-api | [2024-04-18T23:14:33.688+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
23:16:52 prometheus | ts=2024-04-18T23:14:19.747Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.139230691Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=973.006µs
23:16:52 simulator | 2024-04-18 23:14:20,173 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
23:16:52 zookeeper | [2024-04-18 23:14:24,553] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:52 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
23:16:52 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
23:16:52 mariadb | Please report any problems at https://mariadb.org/jira
23:16:52 policy-db-migrator | Preparing upgrade release version: 1300
23:16:52 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:52 policy-api | [2024-04-18T23:14:33.770+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 72 ms. Found 6 JPA repository interfaces.
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.143622483Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:52 simulator | 2024-04-18 23:14:20,182 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:52 zookeeper | [2024-04-18 23:14:24,553] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:52 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:52 kafka | remote.log.metadata.manager.listener.name = null 23:16:52 mariadb | 23:16:52 policy-db-migrator | Done 23:16:52 policy-apex-pdp | check.crcs = true 23:16:52 prometheus | ts=2024-04-18T23:14:19.748Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.911µs 23:16:52 policy-api | [2024-04-18T23:14:34.219+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.144706445Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.083522ms 23:16:52 simulator | 2024-04-18 23:14:20,191 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1552ms 23:16:52 zookeeper | [2024-04-18 23:14:24,553] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:52 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:52 kafka | remote.log.reader.max.pending.tasks = 100 23:16:52 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
23:16:52 policy-db-migrator | name version 23:16:52 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:52 prometheus | ts=2024-04-18T23:14:19.748Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:52 policy-api | [2024-04-18T23:14:34.220+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.152136951Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:52 simulator | 2024-04-18 23:14:20,192 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4268 ms. 
23:16:52 zookeeper | [2024-04-18 23:14:24,553] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:52 policy-pap | :: Spring Boot :: (v3.1.10) 23:16:52 kafka | remote.log.reader.threads = 10 23:16:52 mariadb | 23:16:52 policy-db-migrator | policyadmin 0 23:16:52 policy-apex-pdp | client.id = consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-1 23:16:52 prometheus | ts=2024-04-18T23:14:19.753Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:52 policy-api | [2024-04-18T23:14:34.846+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.154549889Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=2.414878ms 23:16:52 simulator | 2024-04-18 23:14:20,197 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:52 zookeeper | [2024-04-18 23:14:24,554] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:16:52 policy-pap | 23:16:52 kafka | remote.log.storage.manager.class.name = null 23:16:52 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:52 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:52 policy-apex-pdp | client.rack = 23:16:52 prometheus | ts=2024-04-18T23:14:19.753Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=138.018µs wal_replay_duration=4.941193ms wbl_replay_duration=310ns total_replay_duration=5.106483ms 23:16:52 policy-api | [2024-04-18T23:14:34.857+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.160077106Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:52 simulator | 2024-04-18 23:14:20,199 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:52 zookeeper | [2024-04-18 23:14:24,555] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:52 policy-pap | [2024-04-18T23:14:44.945+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 23:16:52 kafka | 
remote.log.storage.manager.class.path = null 23:16:52 mariadb | https://mariadb.org/get-involved/ 23:16:52 policy-db-migrator | upgrade: 0 -> 1300 23:16:52 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:52 prometheus | ts=2024-04-18T23:14:19.756Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 23:16:52 policy-api | [2024-04-18T23:14:34.859+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.161509699Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.431382ms 23:16:52 simulator | 2024-04-18 23:14:20,200 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:52 zookeeper | [2024-04-18 23:14:24,555] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:52 policy-pap | [2024-04-18T23:14:44.997+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 29 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:52 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
23:16:52 mariadb | 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:52 prometheus | ts=2024-04-18T23:14:19.756Z caller=main.go:1153 level=info msg="TSDB started" 23:16:52 policy-api | [2024-04-18T23:14:34.859+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.16675983Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:52 simulator | 2024-04-18 23:14:20,200 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:52 zookeeper | [2024-04-18 23:14:24,555] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:52 policy-pap | [2024-04-18T23:14:44.998+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:52 kafka | remote.log.storage.system.enable = false 23:16:52 mariadb | 2024-04-18 23:14:18+00:00 [Note] [Entrypoint]: Database files initialized 23:16:52 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:52 policy-apex-pdp | enable.auto.commit = true 23:16:52 prometheus | ts=2024-04-18T23:14:19.756Z caller=main.go:1335 level=info msg="Loading 
configuration file" filename=/etc/prometheus/prometheus.yml 23:16:52 policy-api | [2024-04-18T23:14:34.959+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.167535254Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=775.625µs 23:16:52 simulator | 2024-04-18 23:14:20,201 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:52 zookeeper | [2024-04-18 23:14:24,555] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:52 policy-pap | [2024-04-18T23:14:47.035+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:52 kafka | replica.fetch.backoff.ms = 1000 23:16:52 mariadb | 2024-04-18 23:14:18+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | exclude.internal.topics = true 23:16:52 prometheus | ts=2024-04-18T23:14:19.757Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=853.828µs db_storage=1.751µs remote_storage=2.35µs web_handler=710ns query_engine=840ns scrape=250.214µs scrape_sd=137.677µs notify=23.822µs notify_sd=8.09µs rules=2.4µs tracing=5.571µs 23:16:52 policy-api | [2024-04-18T23:14:34.960+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3124 ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.171989869Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:52 simulator | 2024-04-18 23:14:20,212 INFO Session workerName=node0 23:16:52 zookeeper | [2024-04-18 23:14:24,555] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider 
(org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:52 policy-pap | [2024-04-18T23:14:47.151+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 106 ms. Found 7 JPA repository interfaces. 23:16:52 kafka | replica.fetch.max.bytes = 1048576 23:16:52 mariadb | 2024-04-18 23:14:18+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:52 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:52 prometheus | ts=2024-04-18T23:14:19.757Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 23:16:52 policy-api | [2024-04-18T23:14:35.395+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.172036782Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=53.563µs 23:16:52 simulator | 2024-04-18 23:14:20,279 INFO Using GSON for REST calls 23:16:52 zookeeper | [2024-04-18 23:14:24,555] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:52 policy-pap | [2024-04-18T23:14:47.623+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:52 kafka | replica.fetch.min.bytes = 1 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 101 ... 
23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:52 prometheus | ts=2024-04-18T23:14:19.757Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 23:16:52 policy-api | [2024-04-18T23:14:35.467+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.181576189Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:52 simulator | 2024-04-18 23:14:20,290 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 23:16:52 zookeeper | [2024-04-18 23:14:24,569] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) 23:16:52 policy-pap | [2024-04-18T23:14:47.624+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:52 kafka | replica.fetch.response.max.bytes = 10485760 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | fetch.min.bytes = 1 23:16:52 policy-api | [2024-04-18T23:14:35.513+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.182907385Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.363188ms 23:16:52 simulator | 2024-04-18 23:14:20,291 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:16:52 zookeeper | [2024-04-18 23:14:24,572] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:52 policy-pap | 
[2024-04-18T23:14:48.227+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:52 kafka | replica.fetch.wait.max.ms = 500 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Number of transaction pools: 1 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | group.id = dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 23:16:52 policy-api | [2024-04-18T23:14:35.809+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.189284681Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:52 simulator | 2024-04-18 23:14:20,291 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1652ms 23:16:52 zookeeper | [2024-04-18 23:14:24,572] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:52 policy-pap | [2024-04-18T23:14:48.237+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:52 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:52 kafka | replica.lag.time.max.ms = 30000 23:16:52 policy-apex-pdp | group.instance.id = null 23:16:52 policy-api | [2024-04-18T23:14:35.839+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.190057875Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=773.244µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.193034056Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:52 simulator | 2024-04-18 23:14:20,291 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4909 ms. 
23:16:52 policy-pap | [2024-04-18T23:14:48.239+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:52 kafka | replica.selector.class = null 23:16:52 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:52 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:52 policy-api | [2024-04-18T23:14:35.942+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@1f11f64e 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.193755057Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=722.461µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.198373472Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:52 simulator | 2024-04-18 23:14:20,292 INFO org.onap.policy.models.simulators starting SO simulator 23:16:52 policy-pap | [2024-04-18T23:14:48.239+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:52 kafka | replica.socket.timeout.ms = 30000 23:16:52 kafka | replication.quota.window.num = 11 23:16:52 policy-apex-pdp | interceptor.classes = [] 23:16:52 policy-api | [2024-04-18T23:14:35.944+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.19904209Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=668.518µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.201392445Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:52 simulator | 2024-04-18 23:14:20,294 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:52 policy-pap | [2024-04-18T23:14:48.344+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:52 kafka | replication.quota.window.size.seconds = 1 23:16:52 kafka | request.timeout.ms = 30000 23:16:52 policy-apex-pdp | internal.leave.group.on.close = true 23:16:52 policy-api | [2024-04-18T23:14:37.897+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.204673713Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.280168ms 23:16:52 grafana | 
logger=migrator t=2024-04-18T23:14:22.207899928Z level=info msg="Executing migration" id="create temp_user v2" 23:16:52 simulator | 2024-04-18 23:14:20,295 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:52 policy-pap | [2024-04-18T23:14:48.344+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3275 ms 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:52 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:52 policy-api | [2024-04-18T23:14:37.900+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.208754317Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=854.529µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.216193754Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:52 simulator | 2024-04-18 23:14:20,295 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:52 policy-pap | [2024-04-18T23:14:48.737+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | isolation.level = read_uncommitted 23:16:52 policy-api | [2024-04-18T23:14:38.908+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.217475347Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.270963ms 23:16:52 grafana | 
logger=migrator t=2024-04-18T23:14:22.22467172Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:52 simulator | 2024-04-18 23:14:20,296 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:52 policy-pap | [2024-04-18T23:14:48.789+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:52 policy-db-migrator | 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 policy-api | [2024-04-18T23:14:39.758+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.225941223Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.269713ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.232975336Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:52 simulator | 2024-04-18 23:14:20,302 INFO Session workerName=node0 23:16:52 policy-pap | [2024-04-18T23:14:49.143+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: 128 rollback segments are active. 23:16:52 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:52 policy-api | [2024-04-18T23:14:40.909+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.234200596Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.22508ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.238680463Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:52 simulator | 2024-04-18 23:14:20,359 INFO Using GSON for REST calls 23:16:52 policy-pap | [2024-04-18T23:14:49.238+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@14982a82 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:52 policy-api | [2024-04-18T23:14:41.117+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2e5f860b, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@42ca6733, org.springframework.security.web.context.SecurityContextHolderFilter@16d52e51, org.springframework.security.web.header.HeaderWriterFilter@6d5934f6, org.springframework.security.web.authentication.logout.LogoutFilter@59ea4ca5, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@2489ee11, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2f643a10, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@315a9738, 
org.springframework.security.web.authentication.AnonymousAuthenticationFilter@452d71e5, org.springframework.security.web.access.ExceptionTranslationFilter@46756a5b, org.springframework.security.web.access.intercept.AuthorizationFilter@1c537671] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.239877192Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.195169ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.244527548Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:52 simulator | 2024-04-18 23:14:20,371 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 23:16:52 policy-pap | [2024-04-18T23:14:49.240+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:52 policy-db-migrator | 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | max.poll.records = 500 23:16:52 policy-api | [2024-04-18T23:14:41.965+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.245172715Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=644.807µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.248244292Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:52 simulator | 2024-04-18 23:14:20,372 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:52 policy-pap | [2024-04-18T23:14:49.269+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:52 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:52 policy-api | 
[2024-04-18T23:14:42.065+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.248758671Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=514.21µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.251556942Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:52 simulator | 2024-04-18 23:14:20,372 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1734ms 23:16:52 policy-pap | [2024-04-18T23:14:50.759+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | metric.reporters = [] 23:16:52 policy-api | [2024-04-18T23:14:42.111+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.251969345Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=412.514µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.256382328Z level=info msg="Executing migration" id="create star table" 23:16:52 simulator | 2024-04-18 23:14:20,372 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 23:16:52 policy-pap | [2024-04-18T23:14:50.770+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:52 policy-db-migrator | 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | metrics.num.samples = 2 23:16:52 policy-api | [2024-04-18T23:14:42.129+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.119 seconds (process running for 11.74) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.257506703Z level=info msg="Migration successfully executed" id="create star table" duration=1.113524ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.260836004Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:52 simulator | 2024-04-18 23:14:20,373 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:52 policy-pap | [2024-04-18T23:14:51.285+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:52 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | metrics.recording.level = INFO 23:16:52 policy-api | [2024-04-18T23:14:58.655+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.26217279Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.335057ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.265063006Z level=info msg="Executing migration" id="create org table v1" 23:16:52 simulator | 2024-04-18 23:14:20,375 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:52 policy-pap | [2024-04-18T23:14:51.701+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:52 policy-api | [2024-04-18T23:14:58.655+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.265863062Z level=info msg="Migration successfully executed" id="create org table v1" duration=799.456µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.270683988Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:52 simulator | 2024-04-18 23:14:20,376 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:52 policy-pap | 
[2024-04-18T23:14:51.843+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:52 mariadb | 2024-04-18 23:14:18 0 [Note] mariadbd: ready for connections. 23:16:52 policy-db-migrator | 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:52 policy-api | [2024-04-18T23:14:58.657+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.271598591Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=912.742µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.27508334Z level=info msg="Executing migration" id="create org_user table v1" 23:16:52 simulator | 2024-04-18 23:14:20,377 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:52 policy-pap | [2024-04-18T23:14:52.157+00:00|INFO|ConsumerConfig|main] ConsumerConfig 
values: 23:16:52 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:52 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:52 policy-api | [2024-04-18T23:14:59.023+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.276342363Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.257642ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.27961576Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:52 simulator | 2024-04-18 23:14:20,378 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:52 policy-pap | allow.auto.create.topics = true 23:16:52 mariadb | 2024-04-18 23:14:19+00:00 [Note] [Entrypoint]: Temporary server started. 
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:52 policy-api | [] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.280796968Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.170487ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.284101337Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:52 simulator | 2024-04-18 23:14:20,385 INFO Session workerName=node0 23:16:52 policy-pap | auto.commit.interval.ms = 5000 23:16:52 mariadb | 2024-04-18 23:14:21+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:52 policy-db-migrator | 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.284893483Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=792.296µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.295529363Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:52 simulator | 2024-04-18 23:14:20,430 INFO Using GSON for REST calls 23:16:52 policy-pap | auto.include.jmx.reporter = true 23:16:52 mariadb | 2024-04-18 23:14:21+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:52 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | request.timeout.ms = 30000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.297289954Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" 
duration=1.756421ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.299984858Z level=info msg="Executing migration" id="Update org table charset" 23:16:52 simulator | 2024-04-18 23:14:20,438 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 23:16:52 policy-pap | auto.offset.reset = latest 23:16:52 mariadb | 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.30001188Z level=info msg="Migration successfully executed" id="Update org table charset" duration=27.562µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.304569081Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:52 simulator | 2024-04-18 23:14:20,439 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:52 policy-pap | bootstrap.servers = [kafka:9092] 23:16:52 mariadb | 23:16:52 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.304593742Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=29.391µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.307233154Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:52 simulator | 2024-04-18 23:14:20,440 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1801ms 23:16:52 policy-pap | check.crcs = true 23:16:52 mariadb | 2024-04-18 23:14:21+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:52 policy-apex-pdp | sasl.jaas.config = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.30751726Z level=info msg="Migration successfully executed" id="Migrate 
all Read Only Viewers to Viewers" duration=285.006µs 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.311657507Z level=info msg="Executing migration" id="create dashboard table" 23:16:52 simulator | 2024-04-18 23:14:20,440 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4937 ms. 
23:16:52 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:52 mariadb | 2024-04-18 23:14:21+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:52 kafka | reserved.broker.max.id = 1000 23:16:52 kafka | sasl.client.callback.handler.class = null 23:16:52 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:52 zookeeper | [2024-04-18 23:14:24,577] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.312997294Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.342237ms 23:16:52 simulator | 2024-04-18 23:14:20,441 INFO org.onap.policy.models.simulators started 23:16:52 policy-pap | client.id = consumer-deefd98f-1600-442c-a15a-d2ceba267151-1 23:16:52 mariadb | #!/bin/bash -xv 23:16:52 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:52 kafka | sasl.jaas.config = null 23:16:52 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.315843987Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:52 policy-pap | client.rack = 23:16:52 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. 
All rights reserved 23:16:52 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:52 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:52 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.316596561Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=752.584µs 23:16:52 policy-pap | connections.max.idle.ms = 540000 23:16:52 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:52 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:52 kafka | sasl.kerberos.service.name = null 23:16:52 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.319234322Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:52 policy-pap | default.api.timeout.ms = 60000 23:16:52 mariadb | # 23:16:52 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:52 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:52 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.320119853Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=885.651µs 23:16:52 policy-pap | enable.auto.commit = true 23:16:52 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:52 kafka | sasl.login.callback.handler.class = null 23:16:52 kafka | sasl.login.class = null 23:16:52 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:52 zookeeper | [2024-04-18 23:14:24,590] 
INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.322880321Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:52 policy-pap | exclude.internal.topics = true 23:16:52 mariadb | # you may not use this file except in compliance with the License. 23:16:52 kafka | sasl.login.connect.timeout.ms = null 23:16:52 kafka | sasl.login.read.timeout.ms = null 23:16:52 policy-apex-pdp | sasl.login.class = null 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.323515537Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=635.226µs 23:16:52 policy-pap | fetch.max.bytes = 52428800 23:16:52 mariadb | # You may obtain a copy of the License at 23:16:52 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:52 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:52 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.330476946Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:52 policy-pap | fetch.max.wait.ms = 500 23:16:52 mariadb | # 23:16:52 kafka | sasl.login.refresh.window.factor = 0.8 23:16:52 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:52 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.331255181Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=778.545µs 
23:16:52 policy-pap | fetch.min.bytes = 1 23:16:52 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:52 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:52 kafka | sasl.login.retry.backoff.ms = 100 23:16:52 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.333900473Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:52 policy-pap | group.id = deefd98f-1600-442c-a15a-d2ceba267151 23:16:52 mariadb | # 23:16:52 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:52 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:52 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:52 zookeeper | [2024-04-18 23:14:24,590] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.334996456Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.111233ms 23:16:52 policy-pap | group.instance.id = null 23:16:52 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:52 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:52 kafka | sasl.oauthbearer.expected.audience = null 23:16:52 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.337844689Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:52 policy-pap | heartbeat.interval.ms = 3000 23:16:52 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:52 kafka | sasl.oauthbearer.expected.issuer = 
null 23:16:52 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:52 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:host.name=00ec3651d3eb (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.342935991Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.096893ms 23:16:52 policy-pap | interceptor.classes = [] 23:16:52 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:52 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:52 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:52 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.345586153Z level=info msg="Executing migration" id="create dashboard v2" 23:16:52 policy-pap | internal.leave.group.on.close = true 23:16:52 mariadb | # See the License for the specific language governing permissions and 23:16:52 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:52 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:52 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.346337996Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=751.573µs 23:16:52 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:52 mariadb | # limitations under the License. 
23:16:52 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:52 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:52 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.350798782Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:52 policy-pap | isolation.level = read_uncommitted 23:16:52 mariadb | 23:16:52 kafka | sasl.server.callback.handler.class = null 23:16:52 kafka | sasl.server.max.receive.size = 524288 23:16:52 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.351501202Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=702.31µs 23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr
/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.
jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/u
sr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:52 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 mariadb | for db in migration pooling policyadmin operationshistory clampacm 
policyclamp
23:16:52 kafka | security.inter.broker.protocol = PLAINTEXT
23:16:52 kafka | security.providers = null
23:16:52 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.357675636Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | max.partition.fetch.bytes = 1048576
23:16:52 mariadb | do
23:16:52 kafka | server.max.startup.time.ms = 9223372036854775807
23:16:52 kafka | socket.connection.setup.timeout.max.ms = 30000
23:16:52 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.359035164Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.359668ms
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | max.poll.interval.ms = 300000
23:16:52 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
23:16:52 kafka | socket.connection.setup.timeout.ms = 10000
23:16:52 kafka | socket.listen.backlog.size = 50
23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.363576174Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | max.poll.records = 500
23:16:52 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
23:16:52 kafka | socket.receive.buffer.bytes = 102400
23:16:52 kafka | socket.request.max.bytes = 104857600
23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.3641978Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=624.036µs
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | metadata.max.age.ms = 300000
23:16:52 mariadb | done
23:16:52 kafka | socket.send.buffer.bytes = 102400
23:16:52 kafka | ssl.cipher.suites = []
23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.367427635Z level=info msg="Executing migration" id="drop table dashboard_v1"
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | metric.reporters = []
23:16:52 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.368091023Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=663.348µs
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | metrics.num.samples = 2
23:16:52 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
23:16:52 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.371768364Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | metrics.recording.level = INFO
23:16:52 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:52 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.371820187Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=52.223µs
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | metrics.sample.window.ms = 30000
23:16:52 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:52 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.374389474Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:52 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
23:16:52 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.376216999Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.825735ms
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | receive.buffer.bytes = 65536
23:16:52 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:52 policy-apex-pdp | security.providers = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.379695228Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | reconnect.backoff.max.ms = 1000
23:16:52 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:52 policy-apex-pdp | send.buffer.bytes = 131072
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | reconnect.backoff.ms = 50
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.383056211Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.359583ms
23:16:52 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
23:16:52 policy-apex-pdp | session.timeout.ms = 45000
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | request.timeout.ms = 30000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.387855776Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
23:16:52 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:52 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | retry.backoff.ms = 100
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.389125959Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.270293ms
23:16:52 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:52 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | sasl.client.callback.handler.class = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.391692986Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
23:16:52 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
23:16:52 policy-apex-pdp | ssl.cipher.suites = null
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | sasl.jaas.config = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.392245498Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=552.822µs
23:16:52 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:52 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.395517676Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
23:16:52 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:52 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:52 zookeeper | [2024-04-18 23:14:24,592] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.398662066Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.143771ms
23:16:52 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
23:16:52 policy-apex-pdp | ssl.engine.factory.class = null
23:16:52 zookeeper | [2024-04-18 23:14:24,593] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | sasl.kerberos.service.name = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.40308992Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
23:16:52 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:52 policy-apex-pdp | ssl.key.password = null
23:16:52 zookeeper | [2024-04-18 23:14:24,593] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
23:16:52 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.403867274Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=776.594µs
23:16:52 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:52 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:52 zookeeper | [2024-04-18 23:14:24,595] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.406964102Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
23:16:52 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
23:16:52 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:52 zookeeper | [2024-04-18 23:14:24,595] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 policy-pap | sasl.login.callback.handler.class = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.407710375Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=750.613µs
23:16:52 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:52 policy-apex-pdp | ssl.keystore.key = null
23:16:52 zookeeper | [2024-04-18 23:14:24,596] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:52 policy-pap | sasl.login.class = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.410463253Z level=info msg="Executing migration" id="Update dashboard table charset"
23:16:52 mariadb |
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | ssl.keystore.location = null
23:16:52 zookeeper | [2024-04-18 23:14:24,596] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:52 policy-pap | sasl.login.connect.timeout.ms = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.410488444Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.382µs
23:16:52 kafka | ssl.client.auth = none
23:16:52 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | ssl.keystore.password = null
23:16:52 zookeeper | [2024-04-18 23:14:24,596] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:52 policy-pap | sasl.login.read.timeout.ms = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.414836233Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
23:16:52 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:52 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
23:16:52 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
23:16:52 policy-apex-pdp | ssl.keystore.type = JKS
23:16:52 zookeeper | [2024-04-18 23:14:24,596] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:52 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.414918858Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=83.945µs
23:16:52 kafka | ssl.endpoint.identification.algorithm = https
23:16:52 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:52 zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:52 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.417231291Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
23:16:52 kafka | ssl.engine.factory.class = null
23:16:52 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:52 policy-apex-pdp | ssl.provider = null
23:16:52 zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:52 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.420395712Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.163561ms
23:16:52 kafka | ssl.key.password = null
23:16:52 mariadb |
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:52 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.42348948Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
23:16:52 zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:52 kafka | ssl.keymanager.algorithm = SunX509
23:16:52 mariadb | 2024-04-18 23:14:22+00:00 [Note] [Entrypoint]: Stopping temporary server
23:16:52 policy-db-migrator |
23:16:52 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:52 policy-apex-pdp | ssl.truststore.certificates = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.426144982Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.654883ms
23:16:52 zookeeper | [2024-04-18 23:14:24,597] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:52 kafka | ssl.keystore.certificate.chain = null
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
23:16:52 policy-db-migrator |
23:16:52 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:52 policy-apex-pdp | ssl.truststore.location = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.429124883Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
23:16:52 zookeeper | [2024-04-18 23:14:24,599] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 kafka | ssl.keystore.key = null
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: FTS optimize thread exiting.
23:16:52 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
23:16:52 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:52 policy-apex-pdp | ssl.truststore.password = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.431147599Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.021805ms
23:16:52 zookeeper | [2024-04-18 23:14:24,599] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 kafka | ssl.keystore.location = null
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Starting shutdown...
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | sasl.mechanism = GSSAPI
23:16:52 policy-apex-pdp | ssl.truststore.type = JKS
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.43728094Z level=info msg="Executing migration" id="Add column uid in dashboard"
23:16:52 zookeeper | [2024-04-18 23:14:24,600] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
23:16:52 kafka | ssl.keystore.password = null
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:52 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:52 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.439303526Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.022666ms
23:16:52 zookeeper | [2024-04-18 23:14:24,600] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
23:16:52 kafka | ssl.keystore.type = JKS
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Buffer pool(s) dump completed at 240418 23:14:22
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:52 policy-apex-pdp |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.450151438Z level=info msg="Executing migration" id="Update uid column values in dashboard"
23:16:52 zookeeper | [2024-04-18 23:14:24,600] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 kafka | ssl.principal.mapping.rules = DEFAULT
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.204+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.450606974Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=460.786µs
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.460160922Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
23:16:52 zookeeper | [2024-04-18 23:14:24,620] INFO Logging initialized @617ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
23:16:52 kafka | ssl.protocol = TLSv1.3
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Shutdown completed; log sequence number 328781; transaction id 298
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.204+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:52 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.461778345Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.621583ms
23:16:52 zookeeper | [2024-04-18 23:14:24,718] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
23:16:52 kafka | ssl.provider = null
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd: Shutdown complete
23:16:52 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.204+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482096202
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.473215811Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
23:16:52 zookeeper | [2024-04-18 23:14:24,718] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
23:16:52 kafka | ssl.secure.random.implementation = null
23:16:52 mariadb |
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.206+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-1, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Subscribed to topic(s): policy-pdp-pap
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.474916838Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.701598ms
23:16:52 zookeeper | [2024-04-18 23:14:24,738] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server)
23:16:52 kafka | ssl.trustmanager.algorithm = PKIX
23:16:52 mariadb | 2024-04-18 23:14:22+00:00 [Note] [Entrypoint]: Temporary server stopped
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.217+00:00|INFO|ServiceManager|main] service manager starting
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.480408153Z level=info msg="Executing migration" id="Update dashboard title length"
23:16:52 zookeeper | [2024-04-18 23:14:24,768] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
23:16:52 kafka | ssl.truststore.certificates = null
23:16:52 mariadb |
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.217+00:00|INFO|ServiceManager|main] service manager starting topics
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.480442815Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=36.2µs
23:16:52 zookeeper | [2024-04-18 23:14:24,769] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
23:16:52 kafka | ssl.truststore.location = null
23:16:52 mariadb | 2024-04-18 23:14:22+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.219+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
23:16:52 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.488819435Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
23:16:52 zookeeper | [2024-04-18 23:14:24,770] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
23:16:52 kafka | ssl.truststore.password = null
23:16:52 mariadb |
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.244+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:52 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.490341233Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.522087ms
23:16:52 zookeeper | [2024-04-18 23:14:24,773] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
23:16:52 kafka | ssl.truststore.type = JKS
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
23:16:52 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
23:16:52 policy-apex-pdp | allow.auto.create.topics = true
23:16:52 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.498046304Z level=info msg="Executing migration" id="create dashboard_provisioning"
23:16:52 zookeeper | [2024-04-18 23:14:24,782] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
23:16:52 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | auto.commit.interval.ms = 5000
23:16:52 policy-pap | security.protocol = PLAINTEXT
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.500663504Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=2.6073ms
23:16:52 zookeeper | [2024-04-18 23:14:24,798] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
23:16:52 kafka | transaction.max.timeout.ms = 900000
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Number of transaction pools: 1
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:52 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:52 policy-pap | security.providers = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.575690026Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
23:16:52 zookeeper | [2024-04-18 23:14:24,798] INFO Started @795ms (org.eclipse.jetty.server.Server)
23:16:52 kafka | transaction.partition.verification.enable = true
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | auto.offset.reset = latest
23:16:52 policy-pap | send.buffer.bytes = 131072
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.585482398Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=9.791922ms
23:16:52 zookeeper | [2024-04-18 23:14:24,798] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
23:16:52 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:52 policy-pap | session.timeout.ms = 45000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.591444689Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
23:16:52 zookeeper | [2024-04-18 23:14:24,802] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:52 kafka | transaction.state.log.load.buffer.size = 5242880
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | check.crcs = true
23:16:52 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.592134849Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=684.96µs
23:16:52 zookeeper | [2024-04-18 23:14:24,804] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:52 kafka | transaction.state.log.min.isr = 2
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:16:52 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
23:16:52 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:52 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.595786198Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
23:16:52 zookeeper | [2024-04-18 23:14:24,805] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:52 kafka | transaction.state.log.num.partitions = 50
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | client.id = consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2
23:16:52 policy-pap | ssl.cipher.suites = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.597224531Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.437493ms
23:16:52 zookeeper | [2024-04-18 23:14:24,807] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:52 kafka | transaction.state.log.replication.factor = 3
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Completed initialization of buffer pool
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:52 policy-apex-pdp | client.rack =
23:16:52 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.601381989Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
23:16:52 zookeeper | [2024-04-18 23:14:24,823] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:52 kafka | transaction.state.log.segment.bytes = 104857600
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.60279171Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.409761ms
23:16:52 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:52 zookeeper | [2024-04-18 23:14:24,823] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:52 kafka | transactional.id.expiration.ms = 604800000
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: 128 rollback segments are active.
23:16:52 policy-db-migrator |
23:16:52 policy-pap | ssl.engine.factory.class = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.607406025Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
23:16:52 policy-apex-pdp | default.api.timeout.ms = 60000
23:16:52 zookeeper | [2024-04-18 23:14:24,824] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
23:16:52 kafka | unclean.leader.election.enable = false
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
23:16:52 policy-db-migrator |
23:16:52 policy-pap | ssl.key.password = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.607758035Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=351.521µs
23:16:52 policy-apex-pdp | enable.auto.commit = true
23:16:52 zookeeper | [2024-04-18 23:14:24,824] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
23:16:52 kafka | unstable.api.versions.enable = false
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
23:16:52 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
23:16:52 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.612256603Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
23:16:52 policy-apex-pdp | exclude.internal.topics = true
23:16:52 zookeeper | [2024-04-18 23:14:24,829] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
23:16:52 kafka | zookeeper.clientCnxnSocket = null
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: log sequence number 328781; transaction id 299
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | ssl.keystore.certificate.chain = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.6134358Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.177747ms
23:16:52 policy-apex-pdp | fetch.max.bytes = 52428800
23:16:52 zookeeper | [2024-04-18 23:14:24,829] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:52 kafka | zookeeper.connect = zookeeper:2181
23:16:52 kafka | zookeeper.connection.timeout.ms = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.618192983Z level=info msg="Executing migration" id="Add check_sum column"
23:16:52 policy-apex-pdp | fetch.max.wait.ms = 500
23:16:52 zookeeper | [2024-04-18 23:14:24,833] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
23:16:52 policy-pap | ssl.keystore.key = null
23:16:52 kafka | zookeeper.max.in.flight.requests = 10
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.621766108Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.573765ms
23:16:52 policy-apex-pdp | fetch.min.bytes = 1
23:16:52 zookeeper | [2024-04-18 23:14:24,833] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] Plugin 'FEEDBACK' is disabled.
23:16:52 policy-pap | ssl.keystore.location = null
23:16:52 kafka | zookeeper.metadata.migration.enable = false
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.625373695Z level=info msg="Executing migration" id="Add index for dashboard_title"
23:16:52 policy-apex-pdp | group.id = dbe3acf0-ba50-4571-9b48-e58d24ad2dc5
23:16:52 zookeeper | [2024-04-18 23:14:24,834] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:52 mariadb | 2024-04-18 23:14:22 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:16:52 policy-pap | ssl.keystore.password = null 23:16:52 kafka | zookeeper.metadata.migration.min.batch.size = 200 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.626362622Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=988.686µs 23:16:52 policy-apex-pdp | group.instance.id = null 23:16:52 zookeeper | [2024-04-18 23:14:24,843] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:52 mariadb | 2024-04-18 23:14:22 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:52 policy-pap | ssl.keystore.type = JKS 23:16:52 kafka | zookeeper.session.timeout.ms = 18000 23:16:52 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:52 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.631502646Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:52 zookeeper | [2024-04-18 23:14:24,843] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:52 policy-pap | ssl.protocol = TLSv1.3 23:16:52 kafka | zookeeper.set.acl = false 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | interceptor.classes = [] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.631771442Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=269.196µs 23:16:52 zookeeper | [2024-04-18 23:14:24,856] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] Server socket created on IP: '::'. 
23:16:52 policy-pap | ssl.provider = null 23:16:52 kafka | zookeeper.ssl.cipher.suites = null 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:52 policy-apex-pdp | internal.leave.group.on.close = true 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.635007557Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:52 zookeeper | [2024-04-18 23:14:24,857] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] mariadbd: ready for connections. 23:16:52 policy-pap | ssl.secure.random.implementation = null 23:16:52 kafka | zookeeper.ssl.client.enable = false 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.635210519Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=203.112µs 23:16:52 zookeeper | [2024-04-18 23:14:27,353] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:52 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:52 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:52 kafka | zookeeper.ssl.crl.enable = false 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | isolation.level = read_uncommitted 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.638253463Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:52 mariadb | 2024-04-18 23:14:22 0 [Note] InnoDB: Buffer pool(s) load completed at 240418 23:14:22 23:16:52 policy-pap | ssl.truststore.certificates = null 23:16:52 kafka | zookeeper.ssl.enabled.protocols = null 23:16:52 
policy-db-migrator | 23:16:52 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.639175016Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=921.433µs 23:16:52 mariadb | 2024-04-18 23:14:23 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:16:52 policy-pap | ssl.truststore.location = null 23:16:52 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:52 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:52 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.643379607Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:52 mariadb | 2024-04-18 23:14:23 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 23:16:52 policy-pap | ssl.truststore.password = null 23:16:52 kafka | zookeeper.ssl.keystore.location = null 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.645719801Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.339324ms 23:16:52 mariadb | 2024-04-18 23:14:24 35 [Warning] Aborted connection 35 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:52 policy-pap | ssl.truststore.type = JKS 23:16:52 kafka | zookeeper.ssl.keystore.password = null 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:52 policy-apex-pdp 
| max.poll.records = 500 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.663787917Z level=info msg="Executing migration" id="create data_source table" 23:16:52 mariadb | 2024-04-18 23:14:25 82 [Warning] Aborted connection 82 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:52 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 kafka | zookeeper.ssl.keystore.type = null 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.665854996Z level=info msg="Migration successfully executed" id="create data_source table" duration=2.071159ms 23:16:52 policy-pap | 23:16:52 kafka | zookeeper.ssl.ocsp.enable = false 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | metric.reporters = [] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.670079748Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:52 policy-pap | [2024-04-18T23:14:52.340+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:52 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:52 policy-db-migrator | 23:16:52 policy-apex-pdp | metrics.num.samples = 2 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.670750417Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=669.548µs 23:16:52 policy-pap | [2024-04-18T23:14:52.340+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:52 kafka | zookeeper.ssl.truststore.location = null 23:16:52 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:52 policy-apex-pdp | metrics.recording.level = INFO 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.673061129Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:52 policy-pap | 
[2024-04-18T23:14:52.340+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482092339 23:16:52 kafka | zookeeper.ssl.truststore.password = null 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.674236976Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.173767ms 23:16:52 policy-pap | [2024-04-18T23:14:52.343+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-1, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Subscribed to topic(s): policy-pdp-pap 23:16:52 kafka | zookeeper.ssl.truststore.type = null 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:52 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:52 policy-pap | [2024-04-18T23:14:52.344+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:52 kafka | (kafka.server.KafkaConfig) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.703463192Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:52 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:52 policy-pap | allow.auto.create.topics = true 23:16:52 kafka | [2024-04-18 23:14:29,030] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.704787168Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.315805ms 23:16:52 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:52 policy-pap 
| auto.commit.interval.ms = 5000 23:16:52 kafka | [2024-04-18 23:14:29,031] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.739471097Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:52 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:52 policy-pap | auto.include.jmx.reporter = true 23:16:52 kafka | [2024-04-18 23:14:29,032] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:52 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.7411051Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.639134ms 23:16:52 policy-apex-pdp | request.timeout.ms = 30000 23:16:52 policy-pap | auto.offset.reset = latest 23:16:52 kafka | [2024-04-18 23:14:29,036] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.746635238Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:52 policy-apex-pdp | retry.backoff.ms = 100 23:16:52 policy-pap | bootstrap.servers = [kafka:9092] 23:16:52 kafka | [2024-04-18 23:14:29,070] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.75329989Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.664022ms 23:16:52 policy-apex-pdp | 
sasl.client.callback.handler.class = null 23:16:52 policy-pap | check.crcs = true 23:16:52 kafka | [2024-04-18 23:14:29,076] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.756751058Z level=info msg="Executing migration" id="create data_source table v2" 23:16:52 policy-apex-pdp | sasl.jaas.config = null 23:16:52 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:52 kafka | [2024-04-18 23:14:29,086] INFO Loaded 0 logs in 16ms (kafka.log.LogManager) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.757916364Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.163177ms 23:16:52 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:52 policy-pap | client.id = consumer-policy-pap-2 23:16:52 kafka | [2024-04-18 23:14:29,087] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.760614629Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:52 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:52 policy-pap | client.rack = 23:16:52 kafka | [2024-04-18 23:14:29,089] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) 23:16:52 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.761256886Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=642.087µs 23:16:52 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:52 policy-pap | connections.max.idle.ms = 540000 23:16:52 kafka | [2024-04-18 23:14:29,099] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.766931961Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:52 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:52 policy-pap | default.api.timeout.ms = 60000 23:16:52 kafka | [2024-04-18 23:14:29,144] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.767875905Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=942.784µs 23:16:52 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:52 policy-pap | enable.auto.commit = true 23:16:52 kafka | [2024-04-18 23:14:29,162] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.771071339Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:52 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:52 policy-pap | exclude.internal.topics = true 23:16:52 kafka | [2024-04-18 23:14:29,181] INFO Feature ZK 
node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.771690174Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=618.026µs 23:16:52 policy-apex-pdp | sasl.login.class = null 23:16:52 policy-pap | fetch.max.bytes = 52428800 23:16:52 kafka | [2024-04-18 23:14:29,221] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.774280623Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:52 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:52 policy-pap | fetch.max.wait.ms = 500 23:16:52 kafka | [2024-04-18 23:14:29,549] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:52 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.776736043Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.455041ms 23:16:52 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:52 policy-pap | fetch.min.bytes = 1 23:16:52 kafka | [2024-04-18 23:14:29,576] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.783091988Z level=info msg="Executing migration" id="Add secure json data column" 23:16:52 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:52 policy-pap | group.id = policy-pap 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:29,577] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:52 grafana 
| logger=migrator t=2024-04-18T23:14:22.787502881Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.409523ms 23:16:52 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:52 policy-pap | group.instance.id = null 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.791084136Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:52 kafka | [2024-04-18 23:14:29,582] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:52 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:52 policy-pap | heartbeat.interval.ms = 3000 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.791115748Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=31.742µs 23:16:52 kafka | [2024-04-18 23:14:29,587] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:52 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:52 policy-pap | interceptor.classes = [] 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.794773108Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:52 kafka | [2024-04-18 23:14:29,612] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:52 policy-pap | internal.leave.group.on.close = true 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.795364541Z level=info msg="Migration 
successfully executed" id="Update initial version to 1" duration=598.424µs 23:16:52 kafka | [2024-04-18 23:14:29,614] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:52 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:52 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.798907425Z level=info msg="Executing migration" id="Add read_only data column" 23:16:52 kafka | [2024-04-18 23:14:29,615] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:52 policy-pap | isolation.level = read_uncommitted 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.801681434Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.775039ms 23:16:52 kafka | [2024-04-18 23:14:29,618] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:52 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.806128549Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:52 kafka | [2024-04-18 23:14:29,621] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:52 policy-pap | max.partition.fetch.bytes = 1048576 
23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.806366932Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=238.303µs 23:16:52 kafka | [2024-04-18 23:14:29,635] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:52 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:52 policy-pap | max.poll.interval.ms = 300000 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.808516556Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:52 kafka | [2024-04-18 23:14:29,641] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:52 policy-pap | max.poll.records = 500 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.808697996Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=185.581µs 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:52 policy-pap | metadata.max.age.ms = 300000 23:16:52 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.811615283Z level=info msg="Executing migration" id="Add uid column" 23:16:52 kafka | [2024-04-18 23:14:29,664] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:52 policy-pap | metric.reporters = [] 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.813985759Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.369736ms 23:16:52 kafka | [2024-04-18 23:14:29,691] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1713482069680,1713482069680,1,0,0,72057610558636033,258,0,27 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:52 policy-pap | metrics.num.samples = 2 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.818910301Z level=info msg="Executing migration" id="Update uid value" 23:16:52 kafka | (kafka.zk.KafkaZkClient) 23:16:52 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:52 policy-pap | metrics.recording.level = INFO 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.819137125Z level=info msg="Migration successfully executed" id="Update uid value" duration=226.703µs 23:16:52 kafka | [2024-04-18 23:14:29,692] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:52 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:52 policy-pap | metrics.sample.window.ms = 30000 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.820934858Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:52 kafka | [2024-04-18 23:14:29,746] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:52 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = 
null 23:16:52 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.82184732Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=907.903µs 23:16:52 kafka | [2024-04-18 23:14:29,752] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:52 policy-pap | receive.buffer.bytes = 65536 23:16:52 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.82482138Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:52 kafka | [2024-04-18 23:14:29,760] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | security.providers = null 23:16:52 policy-pap | reconnect.backoff.max.ms = 1000 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.82569461Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=872.69µs 23:16:52 kafka | [2024-04-18 23:14:29,760] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:52 policy-apex-pdp | send.buffer.bytes = 131072 23:16:52 policy-pap | reconnect.backoff.ms = 50 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.830934321Z level=info msg="Executing migration" id="create api_key table" 23:16:52 kafka | [2024-04-18 23:14:29,773] INFO 
Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
23:16:52 policy-apex-pdp | session.timeout.ms = 45000
23:16:52 policy-pap | request.timeout.ms = 30000
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.831755658Z level=info msg="Migration successfully executed" id="create api_key table" duration=820.847µs
23:16:52 kafka | [2024-04-18 23:14:29,775] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:52 policy-pap | retry.backoff.ms = 100
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.834618712Z level=info msg="Executing migration" id="add index api_key.account_id"
23:16:52 kafka | [2024-04-18 23:14:29,786] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
23:16:52 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:52 policy-pap | sasl.client.callback.handler.class = null
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.835487922Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=866.43µs
23:16:52 kafka | [2024-04-18 23:14:29,787] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-apex-pdp | ssl.cipher.suites = null
23:16:52 policy-pap | sasl.jaas.config = null
23:16:52 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:16:52 kafka | [2024-04-18 23:14:29,793] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.838297593Z level=info msg="Executing migration" id="add index api_key.key"
23:16:52 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:52 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,798] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.839139471Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=841.238µs
23:16:52 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:52 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:52 kafka | [2024-04-18 23:14:29,819] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.844722871Z level=info msg="Executing migration" id="add index api_key.account_id_name"
23:16:52 policy-apex-pdp | ssl.engine.factory.class = null
23:16:52 policy-pap | sasl.kerberos.service.name = null
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,823] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.845690117Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=929.504µs
23:16:52 policy-apex-pdp | ssl.key.password = null
23:16:52 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,823] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.852555071Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
23:16:52 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:52 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,826] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.85341627Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=862.83µs
23:16:52 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:52 policy-pap | sasl.login.callback.handler.class = null
23:16:52 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
23:16:52 kafka | [2024-04-18 23:14:29,826] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.858399056Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
23:16:52 policy-apex-pdp | ssl.keystore.key = null
23:16:52 policy-pap | sasl.login.class = null
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,837] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.859602855Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.202109ms
23:16:52 policy-apex-pdp | ssl.keystore.location = null
23:16:52 policy-pap | sasl.login.connect.timeout.ms = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:52 kafka | [2024-04-18 23:14:29,842] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.869335263Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
23:16:52 policy-apex-pdp | ssl.keystore.password = null
23:16:52 policy-pap | sasl.login.read.timeout.ms = null
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,846] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.870955586Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.627443ms
23:16:52 policy-apex-pdp | ssl.keystore.type = JKS
23:16:52 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,860] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.874439775Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
23:16:52 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:52 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,867] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.881715263Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.278767ms
23:16:52 policy-apex-pdp | ssl.provider = null
23:16:52 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:52 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
23:16:52 kafka | [2024-04-18 23:14:29,874] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.88499407Z level=info msg="Executing migration" id="create api_key table v2"
23:16:52 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:52 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,881] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.885906303Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=912.073µs
23:16:52 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:52 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
23:16:52 kafka | [2024-04-18 23:14:29,886] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.890421662Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
23:16:52 policy-apex-pdp | ssl.truststore.certificates = null
23:16:52 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,893] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.891327394Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=906.122µs
23:16:52 policy-apex-pdp | ssl.truststore.location = null
23:16:52 policy-pap | sasl.mechanism = GSSAPI
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,894] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.895115071Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
23:16:52 policy-apex-pdp | ssl.truststore.password = null
23:16:52 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,895] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.896719393Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.604542ms
23:16:52 policy-apex-pdp | ssl.truststore.type = JKS
23:16:52 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:52 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.900563853Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
23:16:52 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:52 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,895] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.901475806Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=911.182µs
23:16:52 policy-apex-pdp |
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:52 kafka | [2024-04-18 23:14:29,895] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.905324466Z level=info msg="Executing migration" id="copy api_key v1 to v2"
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,898] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.252+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.905824925Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=497.829µs
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,899] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.252+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.908818236Z level=info msg="Executing migration" id="Drop old table api_key_v1"
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,899] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.252+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482096252
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.909478844Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=660.088µs
23:16:52 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:52 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
23:16:52 kafka | [2024-04-18 23:14:29,900] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.253+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Subscribed to topic(s): policy-pdp-pap
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.913001476Z level=info msg="Executing migration" id="Update api_key table charset"
23:16:52 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,900] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.253+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3e7ff64a-7d2c-4a3c-bce3-3be7547dab57, alive=false, publisher=null]]: starting
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.913136264Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=135.278µs
23:16:52 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:52 kafka | [2024-04-18 23:14:29,901] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.265+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.91934124Z level=info msg="Executing migration" id="Add expires to api_key table"
23:16:52 policy-pap | security.protocol = PLAINTEXT
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,904] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
23:16:52 policy-apex-pdp | acks = -1
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.925355035Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=6.011054ms
23:16:52 policy-pap | security.providers = null
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,905] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
23:16:52 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.92894512Z level=info msg="Executing migration" id="Add service account foreign key"
23:16:52 policy-pap | send.buffer.bytes = 131072
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,909] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
23:16:52 policy-apex-pdp | batch.size = 16384
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.931575161Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.626391ms
23:16:52 policy-pap | session.timeout.ms = 45000
23:16:52 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
23:16:52 kafka | [2024-04-18 23:14:29,918] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
23:16:52 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.934841649Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
23:16:52 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | buffer.memory = 33554432
23:16:52 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:52 kafka | [2024-04-18 23:14:29,920] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.935084382Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=243.914µs
23:16:52 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:52 policy-pap | ssl.cipher.suites = null
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,920] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.939305195Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
23:16:52 policy-apex-pdp | client.id = producer-1
23:16:52 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,920] INFO Kafka startTimeMs: 1713482069914 (org.apache.kafka.common.utils.AppInfoParser)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.941964577Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.658773ms
23:16:52 policy-apex-pdp | compression.type = none
23:16:52 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:29,920] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.945417415Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
23:16:52 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:52 policy-pap | ssl.engine.factory.class = null
23:16:52 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
23:16:52 kafka | [2024-04-18 23:14:29,925] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.948022314Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.603839ms
23:16:52 policy-pap | ssl.key.password = null
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:29,929] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.951504994Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
23:16:52 policy-apex-pdp | delivery.timeout.ms = 120000
23:16:52 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.952405646Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=900.561µs
23:16:52 policy-apex-pdp | enable.idempotence = true
23:16:52 kafka | [2024-04-18 23:14:29,929] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
23:16:52 policy-pap | ssl.keystore.certificate.chain = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.956505031Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
23:16:52 policy-apex-pdp | interceptor.classes = []
23:16:52 kafka | [2024-04-18 23:14:29,930] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
23:16:52 policy-pap | ssl.keystore.key = null
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.957150328Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=644.756µs
23:16:52 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:52 kafka | [2024-04-18 23:14:29,932] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
23:16:52 policy-pap | ssl.keystore.location = null
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.960619927Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
23:16:52 policy-apex-pdp | linger.ms = 0
23:16:52 kafka | [2024-04-18 23:14:29,935] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
23:16:52 policy-pap | ssl.keystore.password = null
23:16:52 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
23:16:52 policy-apex-pdp | max.block.ms = 60000
23:16:52 kafka | [2024-04-18 23:14:29,938] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
23:16:52 policy-pap | ssl.keystore.type = JKS
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.961778803Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.156617ms
23:16:52 policy-apex-pdp | max.in.flight.requests.per.connection = 5
23:16:52 kafka | [2024-04-18 23:14:29,938] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
23:16:52 policy-pap | ssl.protocol = TLSv1.3
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.965023979Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
23:16:52 policy-apex-pdp | max.request.size = 1048576
23:16:52 kafka | [2024-04-18 23:14:29,946] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
23:16:52 policy-pap | ssl.provider = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.965911Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=886.831µs
23:16:52 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:52 kafka | [2024-04-18 23:14:29,946] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
23:16:52 policy-pap | ssl.secure.random.implementation = null
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.97079167Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
23:16:52 policy-apex-pdp | metadata.max.idle.ms = 300000
23:16:52 kafka | [2024-04-18 23:14:29,947] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
23:16:52 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.971728364Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=938.063µs
23:16:52 policy-apex-pdp | metric.reporters = []
23:16:52 kafka | [2024-04-18 23:14:29,947] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
23:16:52 policy-pap | ssl.truststore.certificates = null
23:16:52 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.975119778Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
23:16:52 policy-apex-pdp | metrics.num.samples = 2
23:16:52 kafka | [2024-04-18 23:14:29,949] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
23:16:52 policy-pap | ssl.truststore.location = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.975984348Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=864.229µs
23:16:52 policy-apex-pdp | metrics.recording.level = INFO
23:16:52 kafka | [2024-04-18 23:14:29,964] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
23:16:52 policy-pap | ssl.truststore.password = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.979738493Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
23:16:52 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:52 kafka | [2024-04-18 23:14:29,994] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:52 policy-pap | ssl.truststore.type = JKS
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.979907312Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=170.09µs
23:16:52 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
23:16:52 kafka | [2024-04-18 23:14:29,998] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:52 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.983503749Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
23:16:52 policy-apex-pdp | partitioner.availability.timeout.ms = 0
23:16:52 kafka | [2024-04-18 23:14:30,033] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:52 policy-pap |
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.983593504Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=88.686µs
23:16:52 policy-apex-pdp | partitioner.class = null
23:16:52 kafka | [2024-04-18 23:14:34,967] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
23:16:52 policy-pap | [2024-04-18T23:14:52.349+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:52 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:16:52 policy-apex-pdp | partitioner.ignore.keys = false
23:16:52 kafka | [2024-04-18 23:14:34,967] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.988298704Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
23:16:52 policy-pap | [2024-04-18T23:14:52.349+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | receive.buffer.bytes = 32768
23:16:52 kafka | [2024-04-18 23:14:54,784] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.992683595Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.384982ms
23:16:52 policy-pap | [2024-04-18T23:14:52.349+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482092349
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:52 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:52 kafka | [2024-04-18 23:14:54,790] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.996361516Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
23:16:52 policy-pap | [2024-04-18T23:14:52.350+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:52 kafka | [2024-04-18 23:14:54,793] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:22.999157186Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.79338ms
23:16:52 policy-pap | [2024-04-18T23:14:52.805+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
23:16:52 policy-db-migrator |
23:16:52 policy-apex-pdp | request.timeout.ms = 30000
23:16:52 kafka | [2024-04-18 23:14:54,795] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.002674578Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:14:52.974+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:16:52 policy-apex-pdp | retries = 2147483647
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.002826497Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=151.888µs
23:16:52 kafka | [2024-04-18 23:14:54,825] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(idJrMUf2Q6auoCOWuYUphA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(Ri5cls-BQlq9q6kFJBomtA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
23:16:52 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:52 policy-pap | [2024-04-18T23:14:53.238+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@cea67b1, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5d98364c, org.springframework.security.web.context.SecurityContextHolderFilter@76105ac0, org.springframework.security.web.header.HeaderWriterFilter@42805abe,
org.springframework.security.web.authentication.logout.LogoutFilter@1870b9b8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@2aeb7c4c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@30cb223b, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@50e24ea4, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@23d23d98, org.springframework.security.web.access.ExceptionTranslationFilter@20f99c18, org.springframework.security.web.access.intercept.AuthorizationFilter@4fd63c43] 23:16:52 policy-apex-pdp | retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.007412345Z level=info msg="Executing migration" id="create quota table v1" 23:16:52 kafka | [2024-04-18 23:14:54,827] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 
(kafka.controller.KafkaController) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:14:54.119+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:52 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:52 kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.221+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:52 policy-apex-pdp | sasl.jaas.config = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.0082226Z level=info msg="Migration successfully executed" id="create quota table v1" duration=810.335µs 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:52 kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.243+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:52 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.011784037Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.264+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:52 policy-apex-pdp | 
sasl.kerberos.min.time.before.relogin = 60000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.013259679Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.474532ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:54,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.264+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:52 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.017223769Z level=info msg="Executing migration" id="Update quota table charset" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.265+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:52 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.017382027Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=159.329µs 23:16:52 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.266+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.021001418Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | 
[2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.266+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:52 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.021888787Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=886.899µs 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.266+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:52 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.026427109Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.267+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:52 policy-apex-pdp | sasl.login.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.027403233Z level=info msg="Migration successfully executed" id="create index 
UQE_plugin_setting_org_id_plugin_id - v1" duration=975.885µs 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.269+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deefd98f-1600-442c-a15a-d2ceba267151, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@178ebac3 23:16:52 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.030915437Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.280+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deefd98f-1600-442c-a15a-d2ceba267151, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:52 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.034083793Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.166346ms 23:16:52 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.038653946Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:14:54.281+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.038765493Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=111.606µs 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:52 policy-pap | allow.auto.create.topics = true 
23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.053348901Z level=info msg="Executing migration" id="create session table" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | auto.commit.interval.ms = 5000 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.05513132Z level=info msg="Migration successfully executed" id="create session table" duration=1.786869ms 23:16:52 policy-db-migrator | 23:16:52 policy-pap | auto.include.jmx.reporter = true 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.05909823Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | auto.offset.reset = latest 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.059230767Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=133.498µs 23:16:52 policy-db-migrator | > 
upgrade 0450-pdpgroup.sql 23:16:52 policy-pap | bootstrap.servers = [kafka:9092] 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.063049859Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | check.crcs = true 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.063182656Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=134.898µs 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:52 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:52 kafka | [2024-04-18 23:14:54,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.066261147Z level=info msg="Executing migration" id="create playlist table v2" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | client.id = consumer-deefd98f-1600-442c-a15a-d2ceba267151-3 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition 
to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.067019169Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=757.032µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | client.rack = 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.069830884Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | connections.max.idle.ms = 540000 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.070636559Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=805.495µs 23:16:52 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:52 policy-pap | default.api.timeout.ms = 60000 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.073851677Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | enable.auto.commit = true 23:16:52 kafka | 
[2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.073884969Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=33.392µs 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:52 policy-pap | exclude.internal.topics = true 23:16:52 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.078384589Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | fetch.max.bytes = 52428800 23:16:52 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.078421851Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=37.593µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | 
fetch.max.wait.ms = 500 23:16:52 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.081532393Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | fetch.min.bytes = 1 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.087294292Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.760249ms 23:16:52 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:52 policy-pap | group.id = deefd98f-1600-442c-a15a-d2ceba267151 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | security.providers = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.09048836Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | group.instance.id = null 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | send.buffer.bytes = 131072 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.092954496Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" 
duration=2.463997ms 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:52 policy-pap | heartbeat.interval.ms = 3000 23:16:52 kafka | [2024-04-18 23:14:54,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.097399733Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | interceptor.classes = [] 23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.097462046Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=62.843µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | internal.leave.group.on.close = true 23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | ssl.cipher.suites = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.100746558Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = 
false 23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.100817842Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=71.634µs 23:16:52 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:52 policy-pap | isolation.level = read_uncommitted 23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.103605367Z level=info msg="Executing migration" id="create preferences table v3" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:52 policy-apex-pdp | ssl.engine.factory.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.104543029Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=937.752µs 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT 
NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
23:16:52 policy-pap | max.partition.fetch.bytes = 1048576
23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.key.password = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.110182021Z level=info msg="Executing migration" id="Update preferences table charset"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | max.poll.interval.ms = 300000
23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.110204922Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=22.971µs
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | max.poll.records = 500
23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.113831653Z level=info msg="Executing migration" id="Add column team_id in preferences"
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | metadata.max.age.ms = 300000
23:16:52 kafka | [2024-04-18 23:14:54,833] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.keystore.key = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.119552411Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.650493ms
23:16:52 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
23:16:52 policy-pap | metric.reporters = []
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.keystore.location = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.123275567Z level=info msg="Executing migration" id="Update team_id column values in preferences"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | metrics.num.samples = 2
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.keystore.password = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.123661498Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=382.031µs
23:16:52 policy-pap | metrics.recording.level = INFO
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.keystore.type = JKS
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.127148922Z level=info msg="Executing migration" id="Add column week_start in preferences"
23:16:52 policy-pap | metrics.sample.window.ms = 30000
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.129908005Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.755643ms
23:16:52 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.provider = null
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.132806565Z level=info msg="Executing migration" id="Add column preferences.json_data"
23:16:52 policy-pap | receive.buffer.bytes = 65536
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:52 policy-pap | reconnect.backoff.max.ms = 1000
23:16:52 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.139735559Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=6.927784ms
23:16:52 policy-pap | reconnect.backoff.ms = 50
23:16:52 policy-db-migrator | --------------
23:16:52 policy-apex-pdp | ssl.truststore.certificates = null
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-pap | request.timeout.ms = 30000
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.143767503Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
23:16:52 policy-apex-pdp | ssl.truststore.location = null
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-pap | retry.backoff.ms = 100
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.143864398Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=97.175µs
23:16:52 policy-apex-pdp | ssl.truststore.password = null
23:16:52 kafka | [2024-04-18 23:14:54,834] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:52 policy-pap | sasl.client.callback.handler.class = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.15164746Z level=info msg="Executing migration" id="Add preferences index org_id"
23:16:52 policy-apex-pdp | ssl.truststore.type = JKS
23:16:52 kafka | [2024-04-18 23:14:54,835] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:52 policy-pap | sasl.jaas.config = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.152617573Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=975.794µs
23:16:52 policy-apex-pdp | transaction.timeout.ms = 60000
23:16:52 kafka | [2024-04-18 23:14:54,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:52 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.157222179Z level=info msg="Executing migration" id="Add preferences index user_id"
23:16:52 policy-apex-pdp | transactional.id = null
23:16:52 kafka | [2024-04-18 23:14:54,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.157854474Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=632.265µs
23:16:52 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.kerberos.service.name = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.160820768Z level=info msg="Executing migration" id="create alert table v1"
23:16:52 policy-apex-pdp | 
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.161965401Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.144313ms
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.273+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.165032751Z level=info msg="Executing migration" id="add index alert org_id & id "
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.165954973Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=921.431µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.login.callback.handler.class = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.171712482Z level=info msg="Executing migration" id="add index alert state"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482096289
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.login.class = null
23:16:52 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.172542638Z level=info msg="Migration successfully executed" id="add index alert state" duration=827.265µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3e7ff64a-7d2c-4a3c-bce3-3be7547dab57, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.login.connect.timeout.ms = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.178550931Z level=info msg="Executing migration" id="add index alert dashboard_id"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|ServiceManager|main] service manager starting set alive
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.login.read.timeout.ms = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.179648772Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.101011ms
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.289+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.18484281Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.291+00:00|INFO|ServiceManager|main] service manager starting topic sinks
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.185543138Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=700.659µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.291+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
23:16:52 kafka | [2024-04-18 23:14:54,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.189109896Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
23:16:52 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.189965053Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=855.287µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
23:16:52 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.193190182Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
23:16:52 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.193994327Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=803.965µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09
23:16:52 policy-pap | sasl.mechanism = GSSAPI
23:16:52 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.199169254Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.209237362Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.067588ms
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.293+00:00|INFO|ServiceManager|main] service manager starting Create REST server
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.216039929Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.308+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:52 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.216551997Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=512.768µs
23:16:52 policy-apex-pdp | []
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.221515062Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.311+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.222346078Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=831.136µs
23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46ccc948-34e0-4af2-90d0-f747053a8608","timestampMs":1713482096295,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.226985745Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|ServiceManager|main] service manager starting Rest Server
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.227260901Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=274.936µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|ServiceManager|main] service manager starting
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.232802828Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
23:16:52 kafka | [2024-04-18 23:14:54,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:52 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.233338388Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=533.039µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.453+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | security.protocol = PLAINTEXT
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.236283091Z level=info msg="Executing migration" id="create alert_notification table v1"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|ServiceManager|main] service manager started
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | security.providers = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.237101566Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=817.825µs
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|ServiceManager|main] service manager started
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | send.buffer.bytes = 131072
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.241194533Z level=info msg="Executing migration" id="Add column is_default"
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-pap | session.timeout.ms = 45000
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.244687487Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.491163ms
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.463+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:52 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.249323134Z level=info msg="Executing migration" id="Add column frequency"
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.615+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
23:16:52 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:52 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.252768175Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.445301ms
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.615+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ
23:16:52 policy-pap | ssl.cipher.suites = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.256375745Z level=info msg="Executing migration" id="Add column send_reminder"
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.617+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
23:16:52 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.259855307Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.476513ms
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.617+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:52 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.263657868Z level=info msg="Executing migration" id="Add column disable_resolve_message"
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.628+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] (Re-)joining group
23:16:52 policy-pap | ssl.engine.factory.class = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.267130861Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.480023ms
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.643+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Request joining group due to: need to re-join with the given member-id: consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776
23:16:52 policy-pap | ssl.key.password = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.27018991Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
23:16:52 kafka | [2024-04-18 23:14:54,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:52 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:52 policy-db-migrator | > upgrade 0570-toscadatatype.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.271014726Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=824.616µs
23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:56.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] (Re-)joining group
23:16:52 policy-pap | ssl.keystore.certificate.chain = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.274949314Z level=info msg="Executing migration" id="Update alert table charset"
23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:57.082+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
23:16:52 policy-pap | ssl.keystore.key = null
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.279260413Z... (see next migrator line)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.274979686Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=31.532µs
23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:57.082+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
23:16:52 policy-pap | ssl.keystore.location = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.279260413Z level=info msg="Executing migration" id="Update alert_notification table charset"
23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:59.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776', protocol='range'}
23:16:52 policy-pap | ssl.keystore.password = null
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.279285854Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=26.621µs
23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
23:16:52 policy-apex-pdp | [2024-04-18T23:14:59.659+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Finished assignment for group at generation 1: {consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776=Assignment(partitions=[policy-pdp-pap-0])}
23:16:52 
policy-pap | ssl.keystore.type = JKS 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.282678042Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 policy-pap | ssl.protocol = TLSv1.3 23:16:52 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.283400132Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=721.75µs 23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:14:59.667+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776', protocol='range'} 23:16:52 policy-pap | ssl.provider = null 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.286844793Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 policy-apex-pdp | 
[2024-04-18T23:14:59.668+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:52 policy-pap | ssl.secure.random.implementation = null 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.287773855Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=929.282µs 23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:14:59.671+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Adding newly assigned partitions: policy-pdp-pap-0 23:16:52 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.294410993Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:52 policy-apex-pdp | [2024-04-18T23:14:59.678+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Found no committed offset for partition policy-pdp-pap-0 23:16:52 policy-pap | ssl.truststore.certificates = null 23:16:52 
policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:54,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.295315763Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=911.201µs 23:16:52 policy-apex-pdp | [2024-04-18T23:14:59.686+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2, groupId=dbe3acf0-ba50-4571-9b48-e58d24ad2dc5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:52 policy-pap | ssl.truststore.location = null 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:54,845] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.299691956Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:52 policy-pap | ssl.truststore.password = null 23:16:52 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:52 kafka | [2024-04-18 23:14:54,845] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.300376393Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=680.788µs 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.293+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:52 policy-pap | ssl.truststore.type = JKS 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 
23:14:54,845] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.302859431Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} 23:16:52 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.303497346Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=638.485µs 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.321+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 policy-pap | 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.30626048Z level=info msg="Executing migration" id="Add for to alert table" 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} 23:16:52 policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.309123988Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.863009ms 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.323+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.313230016Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.517+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094286 23:16:52 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.31600368Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.775064ms 23:16:52 policy-apex-pdp | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Subscribed to topic(s): policy-pdp-pap 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.319357936Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.533+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:52 policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message 
Dispatcher 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.319496743Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=138.718µs 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.533+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:52 policy-pap | [2024-04-18T23:14:54.286+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=dcc58c8f-b414-44a2-8a46-354bb82b65f7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@22e95960 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:23.321282732Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} 23:16:52 policy-pap | [2024-04-18T23:14:54.287+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=dcc58c8f-b414-44a2-8a46-354bb82b65f7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.321934478Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=652.406µs 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.535+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:52 policy-pap | [2024-04-18T23:14:54.287+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.326090419Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-pap | allow.auto.create.topics = true 23:16:52 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.326644519Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=554.39µs 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.547+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] 23:16:52 policy-pap | auto.commit.interval.ms = 5000 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.328448959Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} 23:16:52 policy-pap | auto.include.jmx.reporter = true 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.331052834Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.601855ms 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.547+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:52 policy-pap | auto.offset.reset = latest 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.33369005Z 
level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.551+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 policy-pap | bootstrap.servers = [kafka:9092] 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.333738753Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=48.683µs 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | check.crcs = true 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.338232532Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition 
to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.552+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:52 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.339623189Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.391397ms 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | client.id = consumer-policy-pap-4 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.585+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.342459106Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | client.rack = 23:16:52 policy-apex-pdp | 
{"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.343863754Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.403838ms 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | connections.max.idle.ms = 540000 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.590+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.346734773Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
23:16:52 policy-pap | default.api.timeout.ms = 60000 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.346823938Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=89.325µs 23:16:52 kafka | [2024-04-18 23:14:55,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | enable.auto.commit = true 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.598+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.350915675Z level=info msg="Executing migration" id="create annotation table v5" 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | exclude.internal.topics = true 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.351880408Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=965.043µs 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | fetch.max.bytes = 52428800 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.600+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.354785629Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | fetch.max.wait.ms = 500 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.612+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, 
derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.355679089Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=892.89µs 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | fetch.min.bytes = 1 23:16:52 policy-apex-pdp | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.358294764Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | group.id = policy-pap 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.613+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.359153001Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" 
duration=857.847µs 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-pap | group.instance.id = null 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.363395987Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:52 policy-pap | heartbeat.interval.ms = 3000 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.624+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.364278245Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=881.679µs 23:16:52 policy-pap | interceptor.classes = [] 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.372090278Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:52 policy-pap | internal.leave.group.on.close = true 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:16.624+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.373502057Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.417029ms 23:16:52 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-apex-pdp | [2024-04-18T23:15:56.162+00:00|INFO|RequestLog|qtp1863100050-32] 172.17.0.3 - policyadmin [18/Apr/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2" 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.376584528Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:52 policy-pap | isolation.level = read_uncommitted 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.378682504Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.099396ms 23:16:52 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.382888537Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:52 policy-pap | max.partition.fetch.bytes = 1048576 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.382968841Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=81.024µs 23:16:52 policy-pap | max.poll.interval.ms = 300000 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.385493791Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:52 policy-pap | max.poll.records = 500 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.391264481Z level=info msg="Migration successfully executed" id="Add column 
region_id to annotation table" duration=5.76973ms 23:16:52 policy-pap | metadata.max.age.ms = 300000 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.395286254Z level=info msg="Executing migration" id="Drop category_id index" 23:16:52 policy-pap | metric.reporters = [] 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.396146022Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=860.708µs 23:16:52 policy-pap | metrics.num.samples = 2 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.399876239Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:52 policy-pap | metrics.recording.level = INFO 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.404025659Z level=info msg="Migration successfully executed" id="Add column 
tags to annotation table" duration=4.14868ms 23:16:52 policy-pap | metrics.sample.window.ms = 30000 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.409728875Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:52 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.410875168Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.151644ms 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:52 policy-pap | receive.buffer.bytes = 65536 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.415012358Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | reconnect.backoff.max.ms = 1000 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.415898887Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=886.519µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | reconnect.backoff.ms = 50 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.420523583Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | request.timeout.ms = 30000 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.421353309Z level=info msg="Migration successfully executed" 
id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=829.666µs 23:16:52 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:52 policy-pap | retry.backoff.ms = 100 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.client.callback.handler.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.424336704Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:52 policy-pap | sasl.jaas.config = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.435866894Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.526829ms 23:16:52 kafka | [2024-04-18 23:14:55,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.439178757Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:52 kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.43995272Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=777.013µs 23:16:52 kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.kerberos.service.name = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.444232437Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:52 kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:52 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.445281215Z level=info msg="Migration successfully executed" id="create index 
UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.048208ms 23:16:52 kafka | [2024-04-18 23:14:55,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.44843588Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:52 policy-pap | sasl.login.callback.handler.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.449082526Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=644.976µs 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:52 policy-pap | sasl.login.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.453054266Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:52 policy-pap | sasl.login.connect.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.453676741Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=627.395µs 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:52 policy-pap | sasl.login.read.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.456574261Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:52 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:52 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.456847257Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=273.685µs 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:52 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.464071257Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-30 (state.change.logger) 23:16:52 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.469057253Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.986976ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,010] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:52 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.472378107Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:52 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.476357148Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.976581ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:52 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.480346489Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:52 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:52 policy-pap | sasl.mechanism = GSSAPI 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.481213577Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=867.268µs 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.486999488Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_TOSCAPOLICYTYPE (name, version)) 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.487846505Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=847.037µs 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.491792874Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:23.492033527Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=238.694µs 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.495607765Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:52 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.499657799Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.047624ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-20 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.504616624Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:52 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.505484502Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=867.808µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.509427071Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.509709457Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=283.215µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | security.protocol = PLAINTEXT 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.512721804Z level=info msg="Executing migration" id="Move region to single row" 23:16:52 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:52 policy-pap | security.providers = null 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.513312996Z level=info msg="Migration successfully executed" id="Move region to single row" duration=591.133µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | send.buffer.bytes = 131072 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.516522444Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:52 policy-pap | session.timeout.ms = 45000 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.517932332Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.401008ms 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.522882747Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.523676171Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=793.704µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | ssl.cipher.suites = null 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.526448644Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:52 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:52 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:52 kafka | [2024-04-18 23:14:55,011] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.527294601Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=845.577µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.532219554Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:52 policy-pap | ssl.engine.factory.class = null 23:16:52 kafka | [2024-04-18 23:14:55,011] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.533713697Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.494093ms 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | ssl.key.password = null 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.537838426Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | ssl.keystore.certificate.chain = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.538749066Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=915.651µs 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:52 policy-pap | ssl.keystore.key = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.546507576Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | ssl.keystore.location = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.547404666Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=896.85µs 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:52 policy-pap | ssl.keystore.password = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.550695328Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | ssl.keystore.type = JKS 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.550820615Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=125.667µs 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:52 policy-pap | ssl.protocol = TLSv1.3 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.554452867Z level=info msg="Executing migration" id="create test_data table" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:52 policy-pap | ssl.provider = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.55576987Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.316713ms 23:16:52 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:16:52 policy-pap | ssl.secure.random.implementation = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.559837825Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:52 policy-pap | 
ssl.trustmanager.algorithm = PKIX 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.560878763Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.040558ms 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:52 policy-pap | ssl.truststore.certificates = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.563835967Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:52 policy-pap | ssl.truststore.location = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.5651581Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.321633ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:52 policy-pap | ssl.truststore.password = null 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.568350487Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:52 policy-pap | ssl.truststore.type = JKS 23:16:52 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.569347992Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=994.405µs 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:52 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.574281136Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:52 kafka | 
[2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:52 policy-pap | 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.574470206Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=189.66µs 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.291+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.579425001Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.291+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.580251257Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=825.476µs 23:16:52 kafka | [2024-04-18 23:14:55,012] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.291+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094291 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.583877868Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:52 kafka | [2024-04-18 23:14:55,013] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:52 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.583952302Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=75.174µs 23:16:52 kafka | 
[2024-04-18 23:14:55,016] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.587048613Z level=info msg="Executing migration" id="create team table" 23:16:52 kafka | [2024-04-18 23:14:55,019] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:52 policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=dcc58c8f-b414-44a2-8a46-354bb82b65f7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.587862929Z level=info msg="Migration successfully executed" id="create team table" duration=815.705µs 23:16:52 kafka | [2024-04-18 23:14:55,021] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deefd98f-1600-442c-a15a-d2ceba267151, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.591616327Z level=info msg="Executing migration" id="add index team.org_id" 23:16:52 kafka | [2024-04-18 23:14:55,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | [2024-04-18T23:14:54.292+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a066a90b-9103-4d76-8165-c5999a0e1887, alive=false, publisher=null]]: starting 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.592641703Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.024816ms 23:16:52 kafka | [2024-04-18 23:14:55,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | [2024-04-18T23:14:54.310+00:00|INFO|ProducerConfig|main] 
ProducerConfig values: 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.595808529Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:52 kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:52 policy-pap | acks = -1 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.596967433Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.154474ms 23:16:52 kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | auto.include.jmx.reporter = true 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.601042879Z level=info msg="Executing migration" id="Add column uid in team" 23:16:52 kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:52 policy-pap | batch.size = 16384 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.608797159Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.75423ms 23:16:52 kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | bootstrap.servers = [kafka:9092] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.612040629Z 
level=info msg="Executing migration" id="Update uid column values in team" 23:16:52 kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | buffer.memory = 33554432 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.612216318Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=175.659µs 23:16:52 kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.615107749Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:52 kafka | [2024-04-18 23:14:55,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:52 policy-pap | client.id = producer-1 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.615997108Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=889.789µs 23:16:52 kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | compression.type = none 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.61927616Z level=info msg="Executing migration" id="create team member table" 23:16:52 kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from 
NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:52 policy-pap | connections.max.idle.ms = 540000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.620513748Z level=info msg="Migration successfully executed" id="create team member table" duration=1.236978ms 23:16:52 kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | delivery.timeout.ms = 120000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.625126424Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-pap | enable.idempotence = true 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.626569174Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.44455ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-pap | interceptor.classes = [] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.630710874Z level=info msg="Executing migration" 
id="add unique index team_member_org_id_team_id_user_id" 23:16:52 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:52 kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.632557736Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.846493ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,023] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-pap | linger.ms = 0 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.63659858Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:52 kafka | [2024-04-18 23:14:55,024] INFO [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:52 policy-pap | max.block.ms = 60000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.637772015Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.172875ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | max.in.flight.requests.per.connection = 5 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.642409242Z level=info msg="Executing migration" id="Add column email to team table" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | max.request.size = 1048576 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.646296728Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.887795ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from 
controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | metadata.max.age.ms = 300000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.649365188Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:52 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | metadata.max.idle.ms = 300000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.653085484Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.719446ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | metric.reporters = [] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.656296842Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY 
PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | metrics.num.samples = 2 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.661003393Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.709071ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | metrics.recording.level = INFO 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.665422268Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | metrics.sample.window.ms = 30000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.666434354Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.011776ms 
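The policy-db-migrator entries above apply the TOSCA schema with idempotent `CREATE TABLE IF NOT EXISTS` statements, so re-running a migration script against an already-migrated database is a no-op. A minimal sketch of that pattern, using Python's bundled sqlite3 in place of the MariaDB instance the migrator actually targets (the table name echoes `0780-toscarequirements.sql`; the column list is simplified, and SQLite does not accept the MariaDB-style named primary key seen in the log):

```python
import sqlite3

# In-memory database stands in for the MariaDB instance the migrator targets.
conn = sqlite3.connect(":memory:")

# Simplified stand-in for the logged DDL: a (name, version) composite
# primary key, created idempotently via IF NOT EXISTS.
DDL = """
CREATE TABLE IF NOT EXISTS toscarequirements (
    name    VARCHAR(120) NOT NULL,
    version VARCHAR(20)  NOT NULL,
    PRIMARY KEY (name, version)
)
"""

conn.execute(DDL)
conn.execute(DDL)  # second run is a no-op thanks to IF NOT EXISTS

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
```

The same idempotence is why the migrator can be re-pointed at a partially upgraded database without tracking which of the numbered scripts already ran.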
23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.669873884Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:52 policy-pap | partitioner.availability.timeout.ms = 0 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.671036189Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.158424ms 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | partitioner.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.675482415Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:52 
kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:52 policy-pap | partitioner.ignore.keys = false 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.677090364Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.608259ms 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | receive.buffer.bytes = 32768 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.68223927Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | reconnect.backoff.max.ms = 1000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.683210754Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=971.324µs 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | reconnect.backoff.ms = 50 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.687361054Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:52 policy-pap | request.timeout.ms = 30000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.689086459Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.725576ms 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | retries = 2147483647 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.693642312Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:52 kafka | [2024-04-18 23:14:55,024] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:52 policy-pap | retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.695566989Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.930587ms 23:16:52 kafka | [2024-04-18 23:14:55,025] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.client.callback.handler.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.698873942Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.jaas.config = null 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.699538109Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=660.836µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.702133703Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:52 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:52 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.702820321Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=686.599µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.kerberos.service.name = null 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.707125929Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:52 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.707638438Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=512.699µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.710415722Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.login.callback.handler.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.710667956Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=252.374µs 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:52 policy-pap | sasl.login.class = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.714471556Z level=info msg="Executing migration" id="create tag table" 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.login.connect.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.715215258Z level=info msg="Migration successfully executed" id="create tag table" duration=743.221µs 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:52 policy-pap | sasl.login.read.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.718225775Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.719168397Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=941.763µs 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.722282779Z level=info msg="Executing migration" id="create login attempt table" 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.723001969Z level=info msg="Migration successfully executed" id="create login attempt table" duration=719.04µs 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:52 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.726945848Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.727854948Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=911.78µs 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.73094693Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:52 kafka | [2024-04-18 23:14:55,026] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.731907693Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=960.634µs 23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:52 policy-pap | sasl.mechanism = GSSAPI 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.734646855Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.747832176Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=13.18453ms
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:52 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.75170945Z level=info msg="Executing migration" id="create login_attempt v2"
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.753084757Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.375256ms
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.756479335Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.758096345Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.616219ms
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.761208107Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.761500193Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=295.406µs
23:16:52 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.765091512Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
23:16:52 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.765722387Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=627.735µs
23:16:52 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.768577645Z level=info msg="Executing migration" id="create user auth table"
23:16:52 policy-pap | security.protocol = PLAINTEXT
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.769361129Z level=info msg="Migration successfully executed" id="create user auth table" duration=783.364µs
23:16:52 policy-pap | security.providers = null
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.772344654Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
23:16:52 policy-pap | send.buffer.bytes = 131072
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.773288257Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=943.242µs
23:16:52 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.777107548Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
23:16:52 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.777177212Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=69.644µs
23:16:52 policy-pap | ssl.cipher.suites = null
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.779877632Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
23:16:52 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.784932152Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.05367ms
23:16:52 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:52 kafka | [2024-04-18 23:14:55,027] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.787729407Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
23:16:52 policy-pap | ssl.engine.factory.class = null
23:16:52 kafka | [2024-04-18 23:14:55,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.792845781Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.115063ms
23:16:52 policy-pap | ssl.key.password = null
23:16:52 kafka | [2024-04-18 23:14:55,028] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.796668553Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
23:16:52 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.801725763Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.055691ms
23:16:52 policy-pap | ssl.keystore.certificate.chain = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.805440909Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
23:16:52 policy-pap | ssl.keystore.key = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.810467747Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.025628ms
23:16:52 policy-pap | ssl.keystore.location = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.813520747Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
23:16:52 policy-pap | ssl.keystore.password = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.81448809Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=967.364µs
23:16:52 policy-pap | ssl.keystore.type = JKS
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.819116317Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
23:16:52 policy-pap | ssl.protocol = TLSv1.3
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.82548004Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.361132ms
23:16:52 policy-pap | ssl.provider = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.830326008Z level=info msg="Executing migration" id="create server_lock table"
23:16:52 policy-pap | ssl.secure.random.implementation = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.831201107Z level=info msg="Migration successfully executed" id="create server_lock table" duration=874.909µs
23:16:52 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.834247125Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
23:16:52 policy-pap | ssl.truststore.certificates = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.835481684Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.234039ms
23:16:52 policy-pap | ssl.truststore.location = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.840138432Z level=info msg="Executing migration" id="create user auth token table"
23:16:52 policy-pap | ssl.truststore.password = null
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.841060163Z level=info msg="Migration successfully executed" id="create user auth token table" duration=921.281µs
23:16:52 policy-pap | ssl.truststore.type = JKS
23:16:52 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.845800916Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
23:16:52 policy-pap | transaction.timeout.ms = 60000
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.846709696Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=908.29µs
23:16:52 policy-pap | transactional.id = null
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.849620068Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
23:16:52 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:52 policy-db-migrator |
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.850538519Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=919.121µs
23:16:52 policy-pap |
23:16:52 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.853552156Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
23:16:52 policy-pap | [2024-04-18T23:14:54.321+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.854593593Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.037808ms
23:16:52 policy-pap | [2024-04-18T23:14:54.337+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:52 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:14:54.337+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.859547858Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:14:54.337+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094337
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.865441085Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.890216ms
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:14:54.338+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a066a90b-9103-4d76-8165-c5999a0e1887, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.871647919Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:14:54.338+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=192fff36-bd6d-4ee3-9df3-262c724178bf, alive=false, publisher=null]]: starting
23:16:52 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.872604552Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=956.953µs
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:14:54.339+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.877054898Z level=info msg="Executing migration" id="create cache_data table"
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | acks = -1
23:16:52 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.878506289Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.450501ms
23:16:52 kafka | [2024-04-18 23:14:55,029] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | auto.include.jmx.reporter = true
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.883072682Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
23:16:52 kafka | [2024-04-18 23:14:55,030] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
23:16:52 policy-pap | batch.size = 16384
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.88467001Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.598838ms
23:16:52 kafka | [2024-04-18 23:14:55,030] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:52 policy-pap | bootstrap.servers = [kafka:9092]
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.892292763Z level=info msg="Executing migration" id="create short_url table v1"
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:52 policy-pap | buffer.memory = 33554432
23:16:52 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.893257366Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=964.303µs
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:52 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.897389655Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:52 policy-pap | client.id = producer-2
23:16:52 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.899062998Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.673643ms
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:52 policy-pap | compression.type = none
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.902403003Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:52 policy-pap | connections.max.idle.ms = 540000
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.90251771Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=117.856µs
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:52 policy-pap | delivery.timeout.ms = 120000
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.907376759Z level=info msg="Executing migration" id="delete alert_definition table"
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:52 policy-pap | enable.idempotence = true
23:16:52 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.907472804Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=96.635µs
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:52 policy-pap | interceptor.classes = []
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.915045834Z level=info msg="Executing migration" id="recreate alert_definition table"
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:52 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:52 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.916536587Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.490173ms
23:16:52 kafka | [2024-04-18 23:14:55,067] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:52 policy-pap | linger.ms = 0
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.921479841Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:52 policy-pap | max.block.ms = 60000
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.922781483Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.301252ms
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:52 policy-pap | max.in.flight.requests.per.connection = 5
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.92597495Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:52 policy-pap | max.request.size = 1048576
23:16:52 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.926968545Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=993.235µs
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:52 policy-pap | metadata.max.age.ms = 300000
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.930206684Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:52 policy-pap | metadata.max.idle.ms = 300000
23:16:52 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.930406196Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=206.911µs
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:52 policy-pap | metric.reporters = []
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.934349724Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:52 policy-pap | metrics.num.samples = 2
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.935858438Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.508994ms
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:52 policy-pap | metrics.recording.level = INFO
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.939799306Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:52 policy-pap | metrics.sample.window.ms = 30000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.940681995Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=882.719µs
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.949757338Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | partitioner.adaptive.partitioning.enable = true
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.951301504Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.544566ms
23:16:52 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:52 policy-pap | partitioner.availability.timeout.ms = 0
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.957068793Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | partitioner.class = null
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.958615049Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.546606ms
23:16:52 policy-db-migrator |
23:16:52 policy-pap | partitioner.ignore.keys = false
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.961621206Z level=info msg="Executing migration" id="Add column paused in alert_definition"
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | receive.buffer.bytes = 32768
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.967501332Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.879395ms
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:52 policy-pap | reconnect.backoff.max.ms = 1000
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.971778299Z level=info msg="Executing migration" id="drop alert_definition table"
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | reconnect.backoff.ms = 50
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.972812886Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.033997ms
23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:52 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName
FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.978212345Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:52 policy-pap | request.timeout.ms = 30000 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.97848456Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=271.655µs 23:16:52 policy-pap | retries = 2147483647 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.981711599Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:52 policy-pap | retry.backoff.ms = 100 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.982744937Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.032727ms 23:16:52 policy-pap | sasl.client.callback.handler.class = null 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 
(state.change.logger) 23:16:52 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.990594542Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:52 policy-pap | sasl.jaas.config = null 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.991696363Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.101101ms 23:16:52 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:52 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:23.999476964Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:52 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:24.001360068Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.881904ms 23:16:52 policy-pap | sasl.kerberos.service.name = null 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.004748266Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:52 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.005093755Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=344.399µs 23:16:52 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.008952309Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:52 policy-pap | sasl.login.callback.handler.class = null 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition 
for partition __consumer_offsets-30 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.009922652Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=969.763µs 23:16:52 policy-pap | sasl.login.class = null 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:52 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:52 policy-pap | sasl.login.connect.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.01439396Z level=info msg="Executing migration" id="create alert_instance table" 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.login.read.timeout.ms = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.015383154Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=988.414µs 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.0185582Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and 
current_state columns" 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.019837721Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.279491ms 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:52 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.024260005Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.026562033Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=2.303588ms 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:52 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD 
CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:52 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.029992873Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.036082529Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.088497ms 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.mechanism = GSSAPI 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.045184133Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.046466034Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.285791ms 23:16:52 kafka | [2024-04-18 
23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:52 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.05344973Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.05453675Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.093461ms 23:16:52 kafka | [2024-04-18 23:14:55,068] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:52 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.059372228Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:52 kafka | [2024-04-18 23:14:55,070] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, 
__consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.081516822Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=22.143305ms 23:16:52 kafka | [2024-04-18 23:14:55,070] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.085695343Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:52 kafka | [2024-04-18 23:14:55,124] INFO [LogLoader 
partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.112348978Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.646604ms 23:16:52 kafka | [2024-04-18 23:14:55,137] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:52 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.151754497Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,138] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:52 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.153547196Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.793649ms 23:16:52 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:52 kafka | [2024-04-18 23:14:55,142] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | sasl.oauthbearer.token.endpoint.url = 
null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.163748041Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,143] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 policy-pap | security.protocol = PLAINTEXT 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.165400492Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.653122ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,157] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | security.providers = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.17187275Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,158] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | send.buffer.bytes = 131072 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.177553734Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.680794ms 23:16:52 kafka | [2024-04-18 23:14:55,158] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:52 policy-pap 
| socket.connection.setup.timeout.max.ms = 30000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.180935581Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:52 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:52 kafka | [2024-04-18 23:14:55,158] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.186494019Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.557897ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,158] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-pap | ssl.cipher.suites = null 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.19265887Z level=info msg="Executing migration" id="create alert_rule table" 23:16:52 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:52 kafka | [2024-04-18 23:14:55,166] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,166] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.194238887Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.583938ms 23:16:52 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,166] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.202921847Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:52 policy-pap | ssl.engine.factory.class = null 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,166] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition 
__consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.206976652Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=4.053994ms 23:16:52 policy-pap | ssl.key.password = null 23:16:52 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:52 kafka | [2024-04-18 23:14:55,166] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.211985599Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:16:52 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:52 kafka | [2024-04-18 23:14:55,173] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.213124272Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.139783ms 23:16:52 policy-pap | ssl.keystore.certificate.chain = null 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,174] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.220718522Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:52 policy-pap | ssl.keystore.key = null 23:16:52 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime 
NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:52 kafka | [2024-04-18 23:14:55,174] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.222100078Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.383207ms 23:16:52 policy-pap | ssl.keystore.location = null 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,174] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.231915001Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:52 policy-pap | ssl.keystore.password = null 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,174] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.232031937Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=121.036µs 23:16:52 policy-pap | ssl.keystore.type = JKS 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,180] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.238505735Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:52 policy-pap | ssl.protocol = TLSv1.3 23:16:52 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:52 kafka | [2024-04-18 23:14:55,191] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.245308822Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.799186ms 23:16:52 policy-pap | ssl.provider = null 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,191] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.25305258Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:52 policy-pap | ssl.secure.random.implementation = null 23:16:52 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:52 kafka | [2024-04-18 23:14:55,191] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:24.259807594Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.749713ms 23:16:52 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:52 kafka | [2024-04-18 23:14:55,191] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.270284513Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | ssl.truststore.certificates = null 23:16:52 kafka | [2024-04-18 23:14:55,201] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.276760341Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.466148ms 23:16:52 policy-db-migrator | 23:16:52 policy-pap | ssl.truststore.location = null 23:16:52 kafka | [2024-04-18 23:14:55,201] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.280472227Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | ssl.truststore.password = null 23:16:52 kafka | [2024-04-18 23:14:55,201] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:24.281905686Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.43873ms 23:16:52 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:52 policy-pap | ssl.truststore.type = JKS 23:16:52 kafka | [2024-04-18 23:14:55,201] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.286265937Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | transaction.timeout.ms = 60000 23:16:52 kafka | [2024-04-18 23:14:55,201] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.287299594Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.033257ms 23:16:52 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:52 policy-pap | transactional.id = null 23:16:52 kafka | [2024-04-18 23:14:55,214] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.290882612Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:52 kafka | [2024-04-18 23:14:55,217] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.298297943Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.41475ms 23:16:52 policy-db-migrator | 23:16:52 policy-pap | 23:16:52 kafka | [2024-04-18 23:14:55,218] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.306427212Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | [2024-04-18T23:14:54.339+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
23:16:52 kafka | [2024-04-18 23:14:55,219] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.312729561Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.301579ms 23:16:52 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:52 policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:52 kafka | [2024-04-18 23:14:55,219] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.316717321Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:52 kafka | [2024-04-18 23:14:55,227] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:52 policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713482094342 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.317831753Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and 
panel_id columns" duration=1.113422ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,228] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=192fff36-bd6d-4ee3-9df3-262c724178bf, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.32138884Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,228] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.325778632Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.389402ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,228] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.342+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.329796265Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:52 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:52 kafka | [2024-04-18 23:14:55,228] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at 
leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.344+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.33405273Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.255745ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,237] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:14:54.344+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.338720048Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:52 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:52 kafka | [2024-04-18 23:14:55,238] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:14:54.347+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.338791142Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=65.774µs 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,238] INFO 
[Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.348+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.342983164Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,238] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.348+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.344114257Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.131102ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,239] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.349+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.348750983Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:52 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:52 kafka | [2024-04-18 23:14:55,245] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:14:54.351+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.349866625Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.115262ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,245] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:14:54.351+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.353864936Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,245] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.353+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.125 seconds (process running for 10.732) 23:16:52 
grafana | logger=migrator t=2024-04-18T23:14:24.354969237Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.102991ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,245] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.357+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.359299677Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:52 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:52 kafka | [2024-04-18 23:14:55,245] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.762+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.35936664Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=67.394µs 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,254] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:14:54.762+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.362632401Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:52 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:52 kafka | [2024-04-18 23:14:55,255] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:14:54.761+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.369005053Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.370582ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,256] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 
(kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.762+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.373170884Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,256] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.825+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.380244575Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.073231ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,257] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.826+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Cluster ID: 3CcxO9QMSqWFRVbl82UfdQ 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.385922969Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:52 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:52 kafka | [2024-04-18 23:14:55,264] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:14:54.883+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.3920841Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.160891ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,265] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:14:54.883+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.397387313Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:52 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:52 kafka | [2024-04-18 23:14:55,265] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 
(kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.889+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.402334157Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.951334ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,265] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:54.976+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.405591897Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,265] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:54.996+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.411341295Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.747768ms 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,274] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:14:55.082+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.422441079Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:52 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:52 kafka | [2024-04-18 23:14:55,275] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:14:55.102+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.422577076Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=147.418µs 23:16:52 policy-db-migrator | 
-------------- 23:16:52 kafka | [2024-04-18 23:14:55,275] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:55.195+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.427596784Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:52 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:52 kafka | [2024-04-18 23:14:55,276] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:55.210+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.428478433Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=882.259µs 23:16:52 policy-db-migrator | JOIN pdpstatistics b 23:16:52 kafka | [2024-04-18 23:14:55,276] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:14:55.301+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.433603676Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:52 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:52 kafka | [2024-04-18 23:14:55,284] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:14:55.325+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.439081929Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=5.478533ms 23:16:52 policy-db-migrator | SET a.id = b.id 23:16:52 kafka | [2024-04-18 23:14:55,284] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:14:55.419+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.443412309Z level=info msg="Executing migration" id="alert alert_configuration 
alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,284] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:55.433+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.443463032Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=51.283µs 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,285] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:14:55.526+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.449931229Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,285] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:14:55.538+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.46061323Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.684411ms
23:16:52 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
23:16:52 kafka | [2024-04-18 23:14:55,291] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:14:55.633+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.466344067Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:14:55.652+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:52 kafka | [2024-04-18 23:14:55,292] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.467081998Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=738.351µs
23:16:52 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
23:16:52 policy-pap | [2024-04-18T23:14:55.746+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:52 kafka | [2024-04-18 23:14:55,292] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.47037044Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:14:55.756+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:52 kafka | [2024-04-18 23:14:55,292] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.476583413Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.212213ms
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:14:55.764+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:52 kafka | [2024-04-18 23:14:55,293] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.479683135Z level=info msg="Executing migration" id=create_ngalert_configuration_table
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:14:55.771+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:52 kafka | [2024-04-18 23:14:55,302] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.480448817Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=763.192µs
23:16:52 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
23:16:52 policy-pap | [2024-04-18T23:14:55.800+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b
23:16:52 kafka | [2024-04-18 23:14:55,303] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.486593217Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:14:55.800+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:52 kafka | [2024-04-18 23:14:55,303] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.488021296Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.433669ms
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
23:16:52 policy-pap | [2024-04-18T23:14:55.800+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:52 kafka | [2024-04-18 23:14:55,303] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.492284082Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:14:55.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:52 kafka | [2024-04-18 23:14:55,303] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.499342292Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.05777ms
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:14:55.857+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] (Re-)joining group
23:16:52 kafka | [2024-04-18 23:14:55,310] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.506728511Z level=info msg="Executing migration" id="create provenance_type table"
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:14:55.862+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Request joining group due to: need to re-join with the given member-id: consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3
23:16:52 kafka | [2024-04-18 23:14:55,311] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.507498623Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=770.492µs
23:16:52 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
23:16:52 policy-pap | [2024-04-18T23:14:55.862+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:52 kafka | [2024-04-18 23:14:55,311] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.519529959Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
23:16:52 policy-pap | [2024-04-18T23:14:55.862+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] (Re-)joining group
23:16:52 kafka | [2024-04-18 23:14:55,311] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.520965748Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.436679ms
23:16:52 policy-pap | [2024-04-18T23:14:58.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b', protocol='range'}
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.52769149Z level=info msg="Executing migration" id="create alert_image table"
23:16:52 kafka | [2024-04-18 23:14:55,311] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:14:58.833+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b=Assignment(partitions=[policy-pdp-pap-0])}
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.528920068Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.234808ms
23:16:52 kafka | [2024-04-18 23:14:55,319] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:14:58.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Successfully joined group with generation Generation{generationId=1, memberId='consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3', protocol='range'}
23:16:52 policy-db-migrator | 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.532595702Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
23:16:52 kafka | [2024-04-18 23:14:55,320] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-pap | [2024-04-18T23:14:58.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Finished assignment for group at generation 1: {consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3=Assignment(partitions=[policy-pdp-pap-0])}
23:16:52 policy-db-migrator | > upgrade 0210-sequence.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.533907104Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.312543ms
23:16:52 kafka | [2024-04-18 23:14:55,320] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:14:58.881+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b', protocol='range'}
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.539022957Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
23:16:52 kafka | [2024-04-18 23:14:55,320] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:14:58.881+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.539153054Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=135.707µs
23:16:52 kafka | [2024-04-18 23:14:55,320] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:14:58.883+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Successfully synced group in generation Generation{generationId=1, memberId='consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3', protocol='range'}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.54251389Z level=info msg="Executing migration" id=create_alert_configuration_history_table
23:16:52 kafka | [2024-04-18 23:14:55,332] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:14:58.883+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.543648483Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.134503ms
23:16:52 kafka | [2024-04-18 23:14:55,333] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:14:58.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.547542138Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
23:16:52 kafka | [2024-04-18 23:14:55,333] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | > upgrade 0220-sequence.sql
23:16:52 policy-pap | [2024-04-18T23:14:58.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Adding newly assigned partitions: policy-pdp-pap-0
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.54847636Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=935.542µs
23:16:52 kafka | [2024-04-18 23:14:55,333] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:14:58.912+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Found no committed offset for partition policy-pdp-pap-0
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.556498564Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
23:16:52 kafka | [2024-04-18 23:14:55,333] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:52 policy-pap | [2024-04-18T23:14:58.912+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.556782299Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
23:16:52 kafka | [2024-04-18 23:14:55,339] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:14:58.936+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deefd98f-1600-442c-a15a-d2ceba267151-3, groupId=deefd98f-1600-442c-a15a-d2ceba267151] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.563803458Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
23:16:52 kafka | [2024-04-18 23:14:55,340] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:14:58.937+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.564128956Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=323.117µs
23:16:52 kafka | [2024-04-18 23:14:55,340] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:15:00.553+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.568236753Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
23:16:52 kafka | [2024-04-18 23:14:55,340] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
23:16:52 policy-pap | [2024-04-18T23:15:00.553+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.568967493Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=730.71µs
23:16:52 kafka | [2024-04-18 23:14:55,340] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:00.554+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.57144349Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
23:16:52 kafka | [2024-04-18 23:14:55,348] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
23:16:52 policy-pap | [2024-04-18T23:15:16.330+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.576900882Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.456012ms
23:16:52 kafka | [2024-04-18 23:14:55,349] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | []
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.581085584Z level=info msg="Executing migration" id="create library_element table v1"
23:16:52 kafka | [2024-04-18 23:14:55,349] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | [2024-04-18T23:15:16.330+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.582158553Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.07306ms
23:16:52 kafka | [2024-04-18 23:14:55,349] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | 
23:16:52 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.585135047Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
23:16:52 kafka | [2024-04-18 23:14:55,349] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
23:16:52 policy-pap | [2024-04-18T23:15:16.331+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.586183985Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.051018ms
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,357] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a80741ec-ec1d-4c24-9792-e262aa00f81d","timestampMs":1713482116293,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.589072345Z level=info msg="Executing migration" id="create library_element_connection table v1"
23:16:52 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
23:16:52 kafka | [2024-04-18 23:14:55,358] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-pap | [2024-04-18T23:15:16.358+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.589913262Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=840.517µs
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,358] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:16.467+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.593885361Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:55,358] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:16.467+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting listener
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.595009344Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.122702ms
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:55,358] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:15:16.467+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting timer
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.600631495Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
23:16:52 policy-db-migrator | > upgrade 0120-toscatrigger.sql
23:16:52 kafka | [2024-04-18 23:14:55,370] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:15:16.468+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.602280296Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.648632ms
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,371] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-pap | [2024-04-18T23:15:16.470+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting enqueue
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.606323649Z level=info msg="Executing migration" id="increase max description length to 2048"
23:16:52 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
23:16:52 kafka | [2024-04-18 23:14:55,371] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:16.470+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.606386283Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=64.304µs
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,371] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:16.470+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate started
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.609175837Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:55,371] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:15:16.473+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.609239331Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=63.534µs
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:55,379] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.61193452Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:16:52 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
23:16:52 kafka | [2024-04-18 23:14:55,380] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-pap | [2024-04-18T23:15:16.518+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.612221276Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=286.575µs
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,380] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.615058403Z level=info msg="Executing migration" id="create data_keys table"
23:16:52 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
23:16:52 kafka | [2024-04-18 23:14:55,380] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:16.518+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.616265259Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.205667ms
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,380] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.619964594Z level=info msg="Executing migration" id="create secrets table"
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:55,386] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:15:16.519+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.621956544Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.99093ms
23:16:52 policy-db-migrator | 
23:16:52 kafka | [2024-04-18 23:14:55,387] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-pap | [2024-04-18T23:15:16.519+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.625069096Z level=info msg="Executing migration" id="rename data_keys name column to id"
23:16:52 policy-db-migrator | > upgrade 0140-toscaparameter.sql
23:16:52 kafka | [2024-04-18 23:14:55,387] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:16.545+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.654048269Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.978233ms
23:16:52 policy-db-migrator | --------------
23:16:52 kafka | [2024-04-18 23:14:55,387] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.659730503Z level=info msg="Executing migration" id="add name column into data_keys"
23:16:52 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
23:16:52 kafka | [2024-04-18 23:14:55,387] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 23:16:52 policy-pap | [2024-04-18T23:15:16.545+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.664753251Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.022768ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,393] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:15:16.548+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.668734031Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,394] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7d0c4e03-b564-45d4-9123-02a767667124","timestampMs":1713482116533,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup"} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.668900331Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=166.809µs 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,394] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:15:16.553+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.671854354Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:52 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:52 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.702478678Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=30.624104ms 23:16:52 kafka | [2024-04-18 23:14:55,394] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.705562758Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:52 kafka | [2024-04-18 23:14:55,394] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:52 kafka | [2024-04-18 23:14:55,402] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping enqueue 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.735506655Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.942396ms 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,403] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping timer 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.739822553Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,403] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.740443968Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=620.875µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.565+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468] 23:16:52 kafka | [2024-04-18 23:14:55,403] INFO [Partition __consumer_offsets-7 broker=1] Log loaded 
for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.745476216Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:52 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:52 policy-pap | [2024-04-18T23:15:16.566+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping listener 23:16:52 kafka | [2024-04-18 23:14:55,403] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.747097826Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.621239ms 23:16:52 policy-pap | [2024-04-18T23:15:16.566+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopped 23:16:52 kafka | [2024-04-18 23:14:55,416] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.754175007Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate successful 23:16:52 kafka | [2024-04-18 23:14:55,425] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", 
segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.754480214Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=305.797µs 23:16:52 kafka | [2024-04-18 23:14:55,426] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 start publishing next request 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.758274784Z level=info msg="Executing migration" id="create permission table" 23:16:52 kafka | [2024-04-18 23:14:55,426] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.759740475Z level=info msg="Migration successfully executed" id="create permission table" duration=1.465011ms 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting listener 23:16:52 kafka | [2024-04-18 23:14:55,427] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting timer 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.765718806Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,442] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568] 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.766688319Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=969.464µs 23:16:52 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:52 kafka | [2024-04-18 23:14:55,442] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange starting enqueue 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.774693562Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,443] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:52 policy-pap | [2024-04-18T23:15:16.568+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange started 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.77628309Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.589568ms 23:16:52 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:52 policy-pap | [2024-04-18T23:15:16.569+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568] 23:16:52 policy-db-migrator | -------------- 23:16:52 kafka | [2024-04-18 23:14:55,443] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.779642976Z level=info msg="Executing migration" id="create role table" 23:16:52 policy-pap | [2024-04-18T23:15:16.569+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:52 kafka | [2024-04-18 23:14:55,443] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.780467241Z level=info msg="Migration successfully executed" id="create role table" duration=824.255µs 23:16:52 policy-db-migrator | 23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 kafka | [2024-04-18 23:14:55,456] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.785829788Z level=info msg="Executing migration" id="add column display_name" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.575+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:52 kafka | [2024-04-18 23:14:55,457] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.795771538Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.94168ms 23:16:52 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:52 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c25a74a6-c62e-4253-9df7-3b45bb0657f1","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"88e33828-f496-4f68-bdce-d4632c78eedd","timestampMs":1713482116535,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 kafka | [2024-04-18 23:14:55,457] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.798905921Z level=info msg="Executing migration" id="add column group_name" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.579+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c25a74a6-c62e-4253-9df7-3b45bb0657f1 23:16:52 kafka | [2024-04-18 23:14:55,457] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.804685671Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.77834ms 23:16:52 policy-db-migrator | 23:16:52 policy-pap | [2024-04-18T23:15:16.584+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:52 kafka | [2024-04-18 23:14:55,457] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.810211856Z level=info msg="Executing migration" id="add index role.org_id" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 kafka | [2024-04-18 23:14:55,471] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.811168919Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=956.843µs 23:16:52 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:52 policy-pap | [2024-04-18T23:15:16.584+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:52 kafka | [2024-04-18 23:14:55,472] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.816628941Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.592+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 kafka | [2024-04-18 23:14:55,472] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.817817427Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" 
duration=1.190186ms 23:16:52 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","timestampMs":1713482116438,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 kafka | [2024-04-18 23:14:55,472] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.821048056Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.592+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:52 kafka | [2024-04-18 23:14:55,472] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 policy-db-migrator | 23:16:52 policy-pap | [2024-04-18T23:15:16.597+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:52 kafka | [2024-04-18 23:14:55,483] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.822706478Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.654601ms 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 kafka | [2024-04-18 23:14:55,484] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.827534145Z level=info msg="Executing migration" id="create team role table" 23:16:52 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:52 policy-pap | [2024-04-18T23:15:16.598+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 35f314d5-4e39-45ee-b5dd-8c7ab9415862 23:16:52 kafka | [2024-04-18 23:14:55,484] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:52 grafana | 
logger=migrator t=2024-04-18T23:14:24.828780404Z level=info msg="Migration successfully executed" id="create team role table" duration=1.246429ms 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.600+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:52 kafka | [2024-04-18 23:14:55,484] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.836343282Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:52 policy-db-migrator | 23:16:52 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35f314d5-4e39-45ee-b5dd-8c7ab9415862","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"aff58f06-2f5f-434f-a796-4eda85435870","timestampMs":1713482116589,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:52 kafka | [2024-04-18 23:14:55,485] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.837507696Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.169055ms 23:16:52 policy-db-migrator | 23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping 23:16:52 kafka | [2024-04-18 23:14:55,495] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.841832175Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:52 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.842712284Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=880.469µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping enqueue 23:16:52 kafka | [2024-04-18 23:14:55,496] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.849392784Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:52 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping timer 23:16:52 kafka | [2024-04-18 23:14:55,496] INFO [Partition 
__consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.850211959Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=819.696µs 23:16:52 policy-db-migrator | -------------- 23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568] 23:16:52 kafka | [2024-04-18 23:14:55,497] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.857406607Z level=info msg="Executing migration" id="create user role table" 23:16:52 policy-db-migrator | 23:16:52 kafka | [2024-04-18 23:14:55,497] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.858777613Z level=info msg="Migration successfully executed" id="create user role table" duration=1.371705ms
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopping listener
23:16:52 kafka | [2024-04-18 23:14:55,506] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange stopped
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.862439255Z level=info msg="Executing migration" id="add index user_role.org_id"
23:16:52 kafka | [2024-04-18 23:14:55,507] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | > upgrade 0100-upgrade.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.864028253Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.588738ms
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpStateChange successful
23:16:52 kafka | [2024-04-18 23:14:55,507] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.867687505Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 start publishing next request
23:16:52 kafka | [2024-04-18 23:14:55,507] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | select 'upgrade to 1100 completed' as msg
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.869306705Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.61794ms
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting
23:16:52 kafka | [2024-04-18 23:14:55,507] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.876296082Z level=info msg="Executing migration" id="add index user_role.user_id"
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting listener
23:16:52 kafka | [2024-04-18 23:14:55,515] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.877379882Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.08465ms
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting timer
23:16:52 kafka | [2024-04-18 23:14:55,516] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:16:52 policy-db-migrator | msg
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.880721906Z level=info msg="Executing migration" id="create builtin role table"
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=46108e0e-87ab-4235-a93e-b62b8e791b82, expireMs=1713482146601]
23:16:52 kafka | [2024-04-18 23:14:55,516] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | upgrade to 1100 completed
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.882077191Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.355205ms
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate starting enqueue
23:16:52 kafka | [2024-04-18 23:14:55,516] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.885822939Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:16:52 kafka | [2024-04-18 23:14:55,517] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(idJrMUf2Q6auoCOWuYUphA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:15:16.601+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate started
23:16:52 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.887599917Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.777399ms
23:16:52 policy-pap | [2024-04-18T23:15:16.602+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:52 kafka | [2024-04-18 23:14:55,527] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.895101302Z level=info msg="Executing migration" id="add index builtin_role.name"
23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 kafka | [2024-04-18 23:14:55,528] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.897131934Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.034963ms
23:16:52 policy-pap | [2024-04-18T23:15:16.610+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:52 kafka | [2024-04-18 23:14:55,529] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.901709597Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 kafka | [2024-04-18 23:14:55,529] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.910708455Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.997808ms
23:16:52 policy-pap | [2024-04-18T23:15:16.610+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:52 kafka | [2024-04-18 23:14:55,530] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.9154958Z level=info msg="Executing migration" id="add index builtin_role.org_id"
23:16:52 policy-pap | [2024-04-18T23:15:16.613+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:52 kafka | [2024-04-18 23:14:55,540] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.91802447Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.528709ms
23:16:52 policy-pap | {"source":"pap-fa93d91d-c9fa-4126-a299-649d686bbaea","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"46108e0e-87ab-4235-a93e-b62b8e791b82","timestampMs":1713482116585,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 kafka | [2024-04-18 23:14:55,542] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.922667326Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
23:16:52 policy-pap | [2024-04-18T23:15:16.613+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:52 kafka | [2024-04-18 23:14:55,542] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.924693178Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.025532ms
23:16:52 policy-pap | [2024-04-18T23:15:16.623+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:52 kafka | [2024-04-18 23:14:55,542] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.928173121Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
23:16:52 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 kafka | [2024-04-18 23:14:55,542] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:16.623+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.929301353Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.130882ms
23:16:52 kafka | [2024-04-18 23:14:55,551] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping enqueue
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.93484351Z level=info msg="Executing migration" id="add unique index role.uid"
23:16:52 kafka | [2024-04-18 23:14:55,552] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
23:16:52 policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping timer
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.936240777Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.395957ms
23:16:52 kafka | [2024-04-18 23:14:55,552] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=46108e0e-87ab-4235-a93e-b62b8e791b82, expireMs=1713482146601]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.942246229Z level=info msg="Executing migration" id="create seed assignment table"
23:16:52 kafka | [2024-04-18 23:14:55,552] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopping listener
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.943559392Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.309933ms
23:16:52 kafka | [2024-04-18 23:14:55,553] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:16.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate stopped
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.947156511Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
23:16:52 kafka | [2024-04-18 23:14:55,570] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | > upgrade 0120-audit_sequence.sql
23:16:52 policy-pap | [2024-04-18T23:15:16.628+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.948292674Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.136093ms
23:16:52 kafka | [2024-04-18 23:14:55,571] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"46108e0e-87ab-4235-a93e-b62b8e791b82","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"27a5c9bf-e2d6-4122-aa3f-c57320e3422f","timestampMs":1713482116613,"name":"apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.952427092Z level=info msg="Executing migration" id="add column hidden to role table"
23:16:52 kafka | [2024-04-18 23:14:55,571] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:52 policy-pap | [2024-04-18T23:15:16.629+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 46108e0e-87ab-4235-a93e-b62b8e791b82
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.96069712Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.269928ms
23:16:52 kafka | [2024-04-18 23:14:55,571] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:16.629+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 PdpUpdate successful
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.964691761Z level=info msg="Executing migration" id="permission kind migration"
23:16:52 kafka | [2024-04-18 23:14:55,571] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:16.629+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d14c33eb-7783-4d4b-98e1-4d65e14a94e4 has no more requests
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.97117804Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.484968ms
23:16:52 kafka | [2024-04-18 23:14:55,580] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:21.003+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.978107763Z level=info msg="Executing migration" id="permission attribute migration"
23:16:52 kafka | [2024-04-18 23:14:55,581] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
23:16:52 policy-pap | [2024-04-18T23:15:21.055+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.990850398Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=12.743335ms
23:16:52 kafka | [2024-04-18 23:14:55,581] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:21.066+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.994135869Z level=info msg="Executing migration" id="permission identifier migration"
23:16:52 kafka | [2024-04-18 23:14:55,581] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:21.068+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:24.999675476Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.539307ms
23:16:52 kafka | [2024-04-18 23:14:55,581] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:21.512+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.002755026Z level=info msg="Executing migration" id="add permission identifier index"
23:16:52 kafka | [2024-04-18 23:14:55,591] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
23:16:52 policy-pap | [2024-04-18T23:15:22.000+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.003880108Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.124372ms
23:16:52 kafka | [2024-04-18 23:14:55,591] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:22.001+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.007986634Z level=info msg="Executing migration" id="add permission action scope role_id index"
23:16:52 kafka | [2024-04-18 23:14:55,591] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:52 policy-pap | [2024-04-18T23:15:22.529+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.00918747Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.200206ms
23:16:52 kafka | [2024-04-18 23:14:55,591] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:22.765+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.012523843Z level=info msg="Executing migration" id="remove permission role_id action scope index"
23:16:52 kafka | [2024-04-18 23:14:55,591] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:22.865+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.013678936Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.152883ms
23:16:52 kafka | [2024-04-18 23:14:55,599] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:22.866+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.022198293Z level=info msg="Executing migration" id="create query_history table v1"
23:16:52 kafka | [2024-04-18 23:14:55,600] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:52 policy-pap | [2024-04-18T23:15:22.866+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.02322890Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.030527ms
23:16:52 kafka | [2024-04-18 23:14:55,600] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:22.880+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-18T23:15:22Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-18T23:15:22Z, user=policyadmin)]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.031090431Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
23:16:52 kafka | [2024-04-18 23:14:55,600] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:23.553+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.033051608Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.960517ms
23:16:52 kafka | [2024-04-18 23:14:55,600] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-db-migrator | TRUNCATE TABLE sequence
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.036458655Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
23:16:52 kafka | [2024-04-18 23:14:55,608] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:15:23.554+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.036643375Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=187.34µs
23:16:52 kafka | [2024-04-18 23:14:55,609] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-pap | [2024-04-18T23:15:23.554+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.041885973Z level=info msg="Executing migration" id="rbac disabled migrator"
23:16:52 kafka | [2024-04-18 23:14:55,609] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:23.554+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.041959567Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=73.564µs
23:16:52 kafka | [2024-04-18 23:14:55,609] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:23.555+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
23:16:52 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.046100744Z level=info msg="Executing migration" id="teams permissions migration"
23:16:52 kafka | [2024-04-18 23:14:55,610] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:15:23.565+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-18T23:15:23Z, user=policyadmin)]
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.046686326Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=584.762µs
23:16:52 kafka | [2024-04-18 23:14:55,616] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
23:16:52 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.050745299Z level=info msg="Executing migration" id="dashboard permissions"
23:16:52 kafka | [2024-04-18 23:14:55,616] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.051465658Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=721.59µs
23:16:52 kafka | [2024-04-18 23:14:55,616] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.054711396Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
23:16:52 kafka | [2024-04-18 23:14:55,616] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.055410434Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=698.938µs
23:16:52 kafka | [2024-04-18 23:14:55,616] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
23:16:52 policy-db-migrator | DROP TABLE pdpstatistics
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.060011157Z level=info msg="Executing migration" id="drop managed folder create actions"
23:16:52 kafka | [2024-04-18 23:14:55,625] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-pap | [2024-04-18T23:15:23.880+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.060296112Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=285.036µs
23:16:52 kafka | [2024-04-18 23:14:55,626] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:23.895+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-18T23:15:23Z, user=policyadmin)]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.063613624Z level=info msg="Executing migration" id="alerting notification permissions"
23:16:52 kafka | [2024-04-18 23:14:55,626] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
23:16:52 policy-db-migrator |
23:16:52 policy-pap | [2024-04-18T23:15:44.463+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.064199166Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=585.692µs
23:16:52 kafka | [2024-04-18 23:14:55,627] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
23:16:52 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:52 policy-pap | [2024-04-18T23:15:44.465+00:00|INFO|SessionData|http-nio-6969-exec-3] deleting DB group testGroup
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.073090184Z level=info msg="Executing migration" id="create query_history_star table v1"
23:16:52 kafka | [2024-04-18 23:14:55,627] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:52 policy-db-migrator | --------------
23:16:52 policy-pap | [2024-04-18T23:15:46.468+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c25a74a6-c62e-4253-9df7-3b45bb0657f1, expireMs=1713482146468]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.073944801Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=855.477µs
23:16:52 kafka | [2024-04-18 23:14:55,636] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:52 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
23:16:52 policy-pap | [2024-04-18T23:15:46.568+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=35f314d5-4e39-45ee-b5dd-8c7ab9415862, expireMs=1713482146568]
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.082918023Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
23:16:52 kafka | [2024-04-18 23:14:55,641] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:52 policy-db-migrator | --------------
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.084135439Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.217036ms
23:16:52 kafka | [2024-04-18 23:14:55,641] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
23:16:52 policy-db-migrator |
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.088731211Z level=info msg="Executing migration" id="add column org_id in query_history_star"
23:16:52 kafka | [2024-04-18 23:14:55,641] INFO [Partition __consumer_offsets-21
broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.094639785Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.908954ms 23:16:52 kafka | [2024-04-18 23:14:55,641] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.097985159Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:52 kafka | [2024-04-18 23:14:55,654] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.098088044Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=103.195µs 23:16:52 kafka | [2024-04-18 23:14:55,655] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-db-migrator | DROP TABLE statistics_sequence 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.101447209Z level=info msg="Executing migration" id="create correlation table v1" 23:16:52 kafka | [2024-04-18 23:14:55,655] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 
(kafka.cluster.Partition) 23:16:52 policy-db-migrator | -------------- 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.102552389Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.10441ms 23:16:52 kafka | [2024-04-18 23:14:55,655] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 policy-db-migrator | 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.109295349Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:52 kafka | [2024-04-18 23:14:55,656] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.110628572Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.332963ms 23:16:52 kafka | [2024-04-18 23:14:55,666] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 policy-db-migrator | name version 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.113772445Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:52 kafka | [2024-04-18 23:14:55,667] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 policy-db-migrator | policyadmin 1300 23:16:52 kafka | [2024-04-18 23:14:55,667] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is 
found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.115048925Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.27612ms 23:16:52 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:52 kafka | [2024-04-18 23:14:55,667] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.11861712Z level=info msg="Executing migration" id="add correlation config column" 23:16:52 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23 23:16:52 kafka | [2024-04-18 23:14:55,667] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.127789233Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.171613ms 23:16:52 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23 23:16:52 kafka | [2024-04-18 23:14:55,681] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.132217726Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:52 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23 23:16:52 kafka | [2024-04-18 23:14:55,681] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.133445003Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.226927ms 23:16:52 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:23 23:16:52 kafka | [2024-04-18 23:14:55,681] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.143629202Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:52 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,681] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 
(kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.145753858Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.129867ms 23:16:52 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,681] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.151768508Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:52 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,693] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.179302198Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=27.532039ms 23:16:52 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,694] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.183800084Z level=info msg="Executing migration" id="create correlation v2" 23:16:52 policy-db-migrator | 9 
0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,694] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.184740826Z level=info msg="Migration successfully executed" id="create correlation v2" duration=939.592µs 23:16:52 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,694] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.193160218Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:52 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,694] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.194437718Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.277281ms 23:16:52 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,703] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.197791652Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:52 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,704] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.199825663Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.033632ms 23:16:52 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,704] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.20470113Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:52 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,704] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with 
initial high watermark 0 (kafka.cluster.Partition) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.205943329Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.240578ms 23:16:52 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,704] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Ri5cls-BQlq9q6kFJBomtA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.209471482Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:52 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.209862193Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=390.131µs 23:16:52 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.213318503Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:52 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | 
[2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.214227193Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=908.39µs 23:16:52 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.218516168Z level=info msg="Executing migration" id="add provisioning column" 23:16:52 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.226988403Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.469244ms 23:16:52 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.234466823Z level=info msg="Executing migration" id="create entity_events table" 23:16:52 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 
kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.23570018Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.228438ms 23:16:52 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.239375242Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:52 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.240537445Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.161343ms 23:16:52 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.24499915Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:52 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 
0800 1804242314230800u 1 2024-04-18 23:14:24 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.2455373Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:52 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.248808819Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:52 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.249397081Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:52 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:25.253733249Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:52 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.255220551Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.488581ms 23:16:52 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.259916758Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:52 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.261293124Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.375825ms 23:16:52 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-1 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.264922143Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:52 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.266322849Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.400367ms 23:16:52 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.274080815Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:52 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.275358925Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.27787ms 23:16:52 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.282460924Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:52 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.284378199Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.913935ms 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:52 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.288238731Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:52 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.289387844Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.149033ms 23:16:52 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25 
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.294097332Z level=info msg="Executing migration" id="Drop public config table"
23:16:52 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.295381533Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.28181ms
23:16:52 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.299515879Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
23:16:52 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.301620925Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.106226ms
23:16:52 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.305116577Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
23:16:52 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.306441419Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.322873ms
23:16:52 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.310785727Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
23:16:52 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:25
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:52 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.312503822Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.715474ms
23:16:52 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.317099314Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
23:16:52 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:52 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.319100683Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.00116ms
23:16:52 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.325010977Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
23:16:52 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.350270292Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.260675ms
23:16:52 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.353725952Z level=info msg="Executing migration" id="add annotations_enabled column"
23:16:52 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.362125343Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.39744ms
23:16:52 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.367603013Z level=info msg="Executing migration" id="add time_selection_enabled column"
23:16:52 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.374218486Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.613452ms
23:16:52 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.377706657Z level=info msg="Executing migration" id="delete orphaned public dashboards"
23:16:52 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.378032235Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=324.798µs
23:16:52 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.383529216Z level=info msg="Executing migration" id="add share column"
23:16:52 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.395368944Z level=info msg="Migration successfully executed" id="add share column" duration=11.839218ms
23:16:52 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.404707376Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
23:16:52 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.405009763Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=301.987µs
23:16:52 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,712] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,717] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.411398143Z level=info msg="Executing migration" id="create file table"
23:16:52 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,723] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.41260994Z level=info msg="Migration successfully executed" id="create file table" duration=1.209126ms
23:16:52 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,724] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.417233493Z level=info msg="Executing migration" id="file table idx: path natural pk"
23:16:52 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.41881302Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.576217ms
23:16:52 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.423678517Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
23:16:52 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.426135111Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.456155ms
23:16:52 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.429921719Z level=info msg="Executing migration" id="create file_meta table"
23:16:52 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.431163987Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.240968ms
23:16:52 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.435444152Z level=info msg="Executing migration" id="file table idx: path key"
23:16:52 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:26
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.436975746Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.531444ms
23:16:52 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.441956589Z level=info msg="Executing migration" id="set path collation in file table"
23:16:52 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.442005322Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=49.352µs
23:16:52 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.445035858Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
23:16:52 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.445175485Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=141.287µs
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.450268695Z level=info msg="Executing migration" id="managed permissions migration"
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.450773062Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=504.247µs
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.458708848Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.458958431Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=249.834µs
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.461346202Z level=info msg="Executing migration" id="RBAC action name migrator"
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.46240883Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.061828ms
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.465837848Z level=info msg="Executing migration" id="Add UID column to playlist"
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.472134084Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.295746ms
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.475384742Z level=info msg="Executing migration" id="Update uid column values in playlist"
23:16:52 kafka | [2024-04-18 23:14:55,725] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.47552553Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=140.678µs
23:16:52 kafka | [2024-04-18 23:14:55,727] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 4 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.480392867Z level=info msg="Executing migration" id="Add index for uid in playlist"
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.481422403Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.028947ms
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.484614208Z level=info msg="Executing migration" id="update group index for alert rules"
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.485035191Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=421.013µs
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.490922434Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:27
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.491174648Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=251.904µs
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1804242314230800u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.496953615Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.497404529Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=450.715µs
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.501342045Z level=info msg="Executing migration" id="add action column to seed_assignment"
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.508288076Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.944421ms
23:16:52 kafka | [2024-04-18 23:14:55,729] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.511700663Z level=info msg="Executing migration" id="add scope column to seed_assignment"
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.52111702Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.415686ms
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.524922868Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.526123454Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.200516ms
23:16:52 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.532917507Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
23:16:52 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.60756819Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=74.649474ms
23:16:52 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.611385729Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
23:16:52 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.612253537Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=867.848µs
23:16:52 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.618831578Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
23:16:52 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1804242314230900u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.620673169Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.841381ms
23:16:52 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.629069799Z level=info msg="Executing migration" id="add primary key to seed_assigment"
23:16:52 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,730] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.657234303Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.165524ms
23:16:52 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.661825745Z level=info msg="Executing migration" id="add origin column to seed_assignment"
23:16:52 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.670756655Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.93026ms
23:16:52 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.675645023Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
23:16:52 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.67595934Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=314.107µs
23:16:52 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.679293273Z level=info msg="Executing migration" id="prevent seeding OnCall access"
23:16:52 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:28
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.679504474Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=210.801µs
23:16:52 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1804242314231000u 1 2024-04-18 23:14:29
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.683196047Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
23:16:52 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1804242314231100u 1 2024-04-18 23:14:29
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.683741827Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=545.379µs
23:16:52 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.688998195Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
23:16:52 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.689345364Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=347.169µs
23:16:52 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.695251708Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:52 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1804242314231200u 1 2024-04-18 23:14:29
23:16:52 kafka | [2024-04-18 23:14:55,731] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.695679871Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=428.683µs 23:16:52 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1804242314231300u 1 2024-04-18 23:14:29 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 2 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.699320251Z level=info msg="Executing migration" id="create folder table" 23:16:52 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1804242314231300u 1 2024-04-18 23:14:29 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.700131565Z level=info msg="Migration successfully executed" id="create folder table" duration=811.614µs 23:16:52 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1804242314231300u 1 2024-04-18 23:14:29 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.705373713Z level=info msg="Executing migration" id="Add index for parent_uid" 23:16:52 policy-db-migrator | policyadmin: OK @ 1300 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in 
epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.70695759Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.583367ms 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.715316218Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.717956793Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.640455ms 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.721909Z level=info msg="Executing migration" id="Update folder title length" 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.721944852Z level=info msg="Migration successfully executed" id="Update folder title length" duration=36.672µs 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.725776702Z level=info msg="Executing migration" id="Add unique index for 
folder.title and folder.parent_uid" 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.727170848Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.394216ms 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.732659559Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.734066366Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.409367ms 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.737506475Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:52 kafka | [2024-04-18 23:14:55,731] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.738965935Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.46274ms 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 3 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.744686809Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.745152724Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=465.186µs 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.753141312Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:25.753501012Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=359.71µs 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.758035611Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.759562634Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.526344ms 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.763391184Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.764759259Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.369555ms 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.770226159Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.771623746Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.397766ms 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.775237034Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.776608059Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.367315ms 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.780194836Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.781404972Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.211077ms 23:16:52 kafka | [2024-04-18 23:14:55,732] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.785774032Z level=info msg="Executing migration" id="create anon_device table" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.786786187Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.012676ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.79504665Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.797353967Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.307516ms 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.803058969Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.80435535Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.296681ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.809414078Z level=info msg="Executing migration" id="create signing_key table" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.810989724Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.574766ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.814471905Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the 
scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.81584201Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.372175ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.819837069Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.821268368Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.432169ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.825800996Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.826276332Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=476.136µs 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.829633817Z level=info 
msg="Executing migration" id="Add folder_uid for dashboard" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.839161719Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.528833ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.84301025Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.843997754Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=988.534µs 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.848424517Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.850285469Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.860302ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.853809722Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.855431971Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.622319ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator 
t=2024-04-18T23:14:25.859017848Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.860621426Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.619739ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.865560206Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.86727281Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.711894ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.870992064Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.872336658Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.345324ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.879146421Z level=info msg="Executing migration" id="create sso_setting table" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.880374989Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.228258ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.886661184Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.888511945Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.852042ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.892294432Z level=info msg="Executing migration" id="add back entry 
for orgid=0 migrated status" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.89279372Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=499.468µs 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.896825851Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.897083775Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=254.464µs 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.90283548Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.912522472Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.684131ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 
for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.91669558Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.926622655Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.927025ms 23:16:52 kafka | [2024-04-18 23:14:55,733] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.935545224Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 3 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.935973177Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=428.803µs 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=migrator t=2024-04-18T23:14:25.941340212Z level=info msg="migrations completed" performed=548 skipped=0 duration=3.990618926s 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=sqlstore t=2024-04-18T23:14:25.952230399Z level=info msg="Created default admin" user=admin 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=sqlstore t=2024-04-18T23:14:25.952685934Z level=info msg="Created default organization" 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=secrets t=2024-04-18T23:14:25.957238644Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=plugin.store t=2024-04-18T23:14:25.978285378Z level=info msg="Loading plugins..." 
23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=local.finder t=2024-04-18T23:14:26.02190983Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=plugin.store t=2024-04-18T23:14:26.021949882Z level=info msg="Plugins loaded" count=55 duration=43.665305ms 23:16:52 kafka | [2024-04-18 23:14:55,734] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=query_data t=2024-04-18T23:14:26.030346346Z level=info msg="Query Service initialization" 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 2 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=live.push_http t=2024-04-18T23:14:26.03673946Z level=info msg="Live Push Gateway initialization" 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.migration t=2024-04-18T23:14:26.04180481Z level=info msg=Starting 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.migration t=2024-04-18T23:14:26.042329549Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.migration orgID=1 t=2024-04-18T23:14:26.043035158Z level=info msg="Migrating alerts for organisation" 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.migration orgID=1 t=2024-04-18T23:14:26.04414885Z level=info msg="Alerts found to migrate" alerts=0 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.migration t=2024-04-18T23:14:26.046184532Z level=info msg="Completed alerting migration" 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.state.manager t=2024-04-18T23:14:26.081512146Z level=info msg="Running in alternative execution of Error/NoData mode" 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=infra.usagestats.collector t=2024-04-18T23:14:26.083661365Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=provisioning.datasources t=2024-04-18T23:14:26.08574102Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=provisioning.alerting t=2024-04-18T23:14:26.098809943Z level=info msg="starting to provision alerting" 23:16:52 kafka | [2024-04-18 23:14:55,735] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=provisioning.alerting t=2024-04-18T23:14:26.098832764Z level=info msg="finished to provision alerting" 23:16:52 kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.state.manager t=2024-04-18T23:14:26.099004034Z level=info msg="Warming state cache for startup" 23:16:52 kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-18T23:14:26.099060967Z level=info msg="Starting MultiOrg Alertmanager" 23:16:52 kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.state.manager t=2024-04-18T23:14:26.099456509Z level=info msg="State cache has been initialized" states=0 duration=451.305µs 23:16:52 kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ngalert.scheduler t=2024-04-18T23:14:26.099492841Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:16:52 kafka | [2024-04-18 23:14:55,736] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:52 grafana | logger=ticker t=2024-04-18T23:14:26.099542244Z level=info msg=starting first_tick=2024-04-18T23:14:30Z 23:16:52 kafka | [2024-04-18 23:14:55,737] INFO [Broker id=1] Finished LeaderAndIsr request in 716ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:52 grafana | logger=grafanaStorageLogger t=2024-04-18T23:14:26.10039072Z level=info msg="Storage starting" 23:16:52 grafana | logger=http.server t=2024-04-18T23:14:26.104061213Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:52 kafka | [2024-04-18 23:14:55,742] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Ri5cls-BQlq9q6kFJBomtA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=idJrMUf2Q6auoCOWuYUphA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:52 grafana | logger=provisioning.dashboard t=2024-04-18T23:14:26.139603269Z level=info msg="starting to provision dashboards" 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=plugins.update.checker t=2024-04-18T23:14:26.199727525Z level=info msg="Update check succeeded" duration=99.179745ms 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=grafana.update.checker t=2024-04-18T23:14:26.200835026Z level=info msg="Update check succeeded" duration=101.215058ms 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=sqlstore.transactions t=2024-04-18T23:14:26.220101322Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=sqlstore.transactions 
t=2024-04-18T23:14:26.23092728Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=grafana-apiserver t=2024-04-18T23:14:26.367095582Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=grafana-apiserver t=2024-04-18T23:14:26.367652163Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=provisioning.dashboard t=2024-04-18T23:14:26.414354276Z level=info msg="finished to provision dashboards" 23:16:52 kafka | [2024-04-18 23:14:55,752] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', 
partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 grafana | logger=infra.usagestats t=2024-04-18T23:15:59.111138484Z level=info msg="Usage stats are ready to report" 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 
2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request 
sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,753] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,754] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,755] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,756] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,757] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,758] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:52 kafka | [2024-04-18 23:14:55,795] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:55,811] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:55,861] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group deefd98f-1600-442c-a15a-d2ceba267151 in Empty state. Created a new member id consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:55,866] INFO [GroupCoordinator 1]: Preparing to rebalance group deefd98f-1600-442c-a15a-d2ceba267151 in state PreparingRebalance with old generation 0 (__consumer_offsets-22) (reason: Adding new member consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:56,642] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 in Empty state. Created a new member id consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:56,645] INFO [GroupCoordinator 1]: Preparing to rebalance group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:58,824] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:58,847] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-ce004af5-7fb1-465b-9dca-4f86ddcfcd1b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:58,870] INFO [GroupCoordinator 1]: Stabilized group deefd98f-1600-442c-a15a-d2ceba267151 generation 1 (__consumer_offsets-22) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:58,879] INFO [GroupCoordinator 1]: Assignment received from leader consumer-deefd98f-1600-442c-a15a-d2ceba267151-3-2efa9603-1d5c-4957-829c-f970e337f7f3 for group deefd98f-1600-442c-a15a-d2ceba267151 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:59,647] INFO [GroupCoordinator 1]: Stabilized group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:52 kafka | [2024-04-18 23:14:59,664] INFO [GroupCoordinator 1]: Assignment received from leader consumer-dbe3acf0-ba50-4571-9b48-e58d24ad2dc5-2-b078aa9d-088f-42bc-9dfa-8245e5a83776 for group dbe3acf0-ba50-4571-9b48-e58d24ad2dc5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:52 ++ echo 'Tearing down containers...'
23:16:52 Tearing down containers...
23:16:52 ++ docker-compose down -v --remove-orphans
23:16:52 Stopping policy-apex-pdp ...
23:16:52 Stopping policy-pap ...
23:16:52 Stopping kafka ...
23:16:52 Stopping grafana ...
23:16:52 Stopping policy-api ...
23:16:52 Stopping zookeeper ...
23:16:52 Stopping simulator ...
23:16:52 Stopping mariadb ...
23:16:52 Stopping prometheus ...
23:16:53 Stopping grafana ... done
23:16:53 Stopping prometheus ... done
23:17:03 Stopping policy-apex-pdp ... done
23:17:13 Stopping policy-pap ... done
23:17:13 Stopping simulator ... done
23:17:14 Stopping mariadb ... done
23:17:14 Stopping kafka ... done
23:17:15 Stopping zookeeper ... done
23:17:24 Stopping policy-api ... done
23:17:24 Removing policy-apex-pdp ...
23:17:24 Removing policy-pap ...
23:17:24 Removing kafka ...
23:17:24 Removing grafana ...
23:17:24 Removing policy-api ...
23:17:24 Removing policy-db-migrator ...
23:17:24 Removing zookeeper ...
23:17:24 Removing simulator ...
23:17:24 Removing mariadb ...
23:17:24 Removing prometheus ...
23:17:24 Removing policy-db-migrator ... done
23:17:24 Removing zookeeper ... done
23:17:24 Removing policy-api ... done
23:17:24 Removing prometheus ... done
23:17:24 Removing grafana ... done
23:17:24 Removing policy-pap ... done
23:17:24 Removing policy-apex-pdp ... done
23:17:24 Removing simulator ... done
23:17:24 Removing mariadb ... done
23:17:24 Removing kafka ... done
23:17:24 Removing network compose_default
23:17:24 ++ cd /w/workspace/policy-pap-master-project-csit-pap
23:17:24 + load_set
23:17:24 + _setopts=hxB
23:17:24 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:17:24 ++ tr : ' '
23:17:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:24 + set +o braceexpand
23:17:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:24 + set +o hashall
23:17:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:24 + set +o interactive-comments
23:17:24 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:24 + set +o xtrace
23:17:24 ++ echo hxB
23:17:24 ++ sed 's/./& /g'
23:17:24 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:24 + set +h
23:17:24 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:24 + set +x
23:17:24 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:17:24 + [[ -n /tmp/tmp.K0sYyH3Udx ]]
23:17:24 + rsync -av /tmp/tmp.K0sYyH3Udx/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:17:24 sending incremental file list
23:17:24 ./
23:17:24 log.html
23:17:24 output.xml
23:17:24 report.html
23:17:24 testplan.txt
23:17:24
23:17:24 sent 918,822 bytes received 95 bytes 1,837,834.00 bytes/sec
23:17:24 total size is 918,276 speedup is 1.00
23:17:24 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:17:24 + exit 0
23:17:24 $ ssh-agent -k
23:17:24 unset SSH_AUTH_SOCK;
23:17:24 unset SSH_AGENT_PID;
23:17:24 echo Agent pid 2086 killed;
23:17:24 [ssh-agent] Stopped.
23:17:24 Robot results publisher started...
23:17:24 INFO: Checking test criticality is deprecated and will be dropped in a future release!
23:17:24 -Parsing output xml:
23:17:25 Done!
23:17:25 WARNING! Could not find file: **/log.html
23:17:25 WARNING! Could not find file: **/report.html
23:17:25 -Copying log files to build dir:
23:17:25 Done!
23:17:25 -Assigning results to build:
23:17:25 Done!
23:17:25 -Checking thresholds:
23:17:25 Done!
23:17:25 Done publishing Robot results.
23:17:25 [PostBuildScript] - [INFO] Executing post build scripts.
23:17:25 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7213542619536222831.sh
23:17:25 ---> sysstat.sh
23:17:25 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17685881978232348851.sh
23:17:25 ---> package-listing.sh
23:17:25 ++ facter osfamily
23:17:25 ++ tr '[:upper:]' '[:lower:]'
23:17:25 + OS_FAMILY=debian
23:17:25 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:17:25 + START_PACKAGES=/tmp/packages_start.txt
23:17:25 + END_PACKAGES=/tmp/packages_end.txt
23:17:25 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:17:25 + PACKAGES=/tmp/packages_start.txt
23:17:25 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:25 + PACKAGES=/tmp/packages_end.txt
23:17:25 + case "${OS_FAMILY}" in
23:17:25 + dpkg -l
23:17:25 + grep '^ii'
23:17:25 + '[' -f /tmp/packages_start.txt ']'
23:17:25 + '[' -f /tmp/packages_end.txt ']'
23:17:25 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:17:25 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:25 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:25 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:25 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7965212518294138523.sh
23:17:25 ---> capture-instance-metadata.sh
23:17:25 Setup pyenv:
23:17:26 system
23:17:26 3.8.13
23:17:26 3.9.13
23:17:26 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:26 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
23:17:27 lf-activate-venv(): INFO: Installing: lftools
23:17:37 lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
23:17:37 INFO: Running in OpenStack, capturing instance metadata
23:17:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1225904667301443132.sh
23:17:37 provisioning config files...
23:17:37 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config9930047205394287472tmp
23:17:37 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:17:37 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:17:37 [EnvInject] - Injecting environment variables from a build step.
23:17:37 [EnvInject] - Injecting as environment variables the properties content
23:17:37 SERVER_ID=logs
23:17:37
23:17:37 [EnvInject] - Variables injected successfully.
23:17:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9696737471029010238.sh
23:17:37 ---> create-netrc.sh
23:17:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10304794993552994526.sh
23:17:37 ---> python-tools-install.sh
23:17:37 Setup pyenv:
23:17:37 system
23:17:37 3.8.13
23:17:37 3.9.13
23:17:37 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:37 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
23:17:39 lf-activate-venv(): INFO: Installing: lftools
23:17:47 lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
23:17:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins18375602438344878837.sh
23:17:47 ---> sudo-logs.sh
23:17:47 Archiving 'sudo' log..
23:17:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5312943589767163300.sh
23:17:47 ---> job-cost.sh
23:17:47 Setup pyenv:
23:17:47 system
23:17:47 3.8.13
23:17:47 3.9.13
23:17:47 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:47 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
23:17:48 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:53 lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
23:17:53 INFO: No Stack...
23:17:54 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:54 INFO: Archiving Costs
23:17:54 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4579478435418763978.sh
23:17:54 ---> logs-deploy.sh
23:17:54 Setup pyenv:
23:17:54 system
23:17:54 3.8.13
23:17:54 3.9.13
23:17:54 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:54 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-H8Lq from file:/tmp/.os_lf_venv
23:17:56 lf-activate-venv(): INFO: Installing: lftools
23:18:05 lf-activate-venv(): INFO: Adding /tmp/venv-H8Lq/bin to PATH
23:18:06 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1650
23:18:06 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:18:07 Archives upload complete.
23:18:07 INFO: archiving logs to Nexus
23:18:08 ---> uname -a:
23:18:08 Linux prd-ubuntu1804-docker-8c-8g-24270 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:08
23:18:08
23:18:08 ---> lscpu:
23:18:08 Architecture:        x86_64
23:18:08 CPU op-mode(s):      32-bit, 64-bit
23:18:08 Byte Order:          Little Endian
23:18:08 CPU(s):              8
23:18:08 On-line CPU(s) list: 0-7
23:18:08 Thread(s) per core:  1
23:18:08 Core(s) per socket:  1
23:18:08 Socket(s):           8
23:18:08 NUMA node(s):        1
23:18:08 Vendor ID:           AuthenticAMD
23:18:08 CPU family:          23
23:18:08 Model:               49
23:18:08 Model name:          AMD EPYC-Rome Processor
23:18:08 Stepping:            0
23:18:08 CPU MHz:             2800.000
23:18:08 BogoMIPS:            5600.00
23:18:08 Virtualization:      AMD-V
23:18:08 Hypervisor vendor:   KVM
23:18:08 Virtualization type: full
23:18:08 L1d cache:           32K
23:18:08 L1i cache:           32K
23:18:08 L2 cache:            512K
23:18:08 L3 cache:            16384K
23:18:08 NUMA node0 CPU(s):   0-7
23:18:08 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:08
23:18:08
23:18:08 ---> nproc:
23:18:08 8
23:18:08
23:18:08
23:18:08 ---> df -h:
23:18:08 Filesystem      Size  Used Avail Use% Mounted on
23:18:08 udev             16G     0   16G   0% /dev
23:18:08 tmpfs           3.2G  708K  3.2G   1% /run
23:18:08 /dev/vda1       155G   14G  142G   9% /
23:18:08 tmpfs            16G     0   16G   0% /dev/shm
23:18:08 tmpfs           5.0M     0  5.0M   0% /run/lock
23:18:08 tmpfs            16G     0   16G   0% /sys/fs/cgroup
23:18:08 /dev/vda15      105M  4.4M  100M   5% /boot/efi
23:18:08 tmpfs           3.2G     0  3.2G   0% /run/user/1001
23:18:08
23:18:08
23:18:08 ---> free -m:
23:18:08               total        used        free      shared  buff/cache   available
23:18:08 Mem:          32167         851       25162           0        6152       30859
23:18:08 Swap:          1023           0        1023
23:18:08
23:18:08
23:18:08 ---> ip addr:
23:18:08 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:08     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:08     inet 127.0.0.1/8 scope host lo
23:18:08        valid_lft forever preferred_lft forever
23:18:08     inet6 ::1/128 scope host
23:18:08        valid_lft forever preferred_lft forever
23:18:08 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:08     link/ether fa:16:3e:c2:5e:c9 brd ff:ff:ff:ff:ff:ff
23:18:08     inet 10.30.106.211/23 brd 10.30.107.255 scope global dynamic ens3
23:18:08        valid_lft 85932sec preferred_lft 85932sec
23:18:08     inet6 fe80::f816:3eff:fec2:5ec9/64 scope link
23:18:08        valid_lft forever preferred_lft forever
23:18:08 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:08     link/ether 02:42:31:29:96:20 brd ff:ff:ff:ff:ff:ff
23:18:08     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:08        valid_lft forever preferred_lft forever
23:18:08
23:18:08
23:18:08 ---> sar -b -r -n DEV:
23:18:08 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-24270)  04/18/24  _x86_64_  (8 CPU)
23:18:08
23:18:08 23:10:22 LINUX RESTART (8 CPU)
23:18:08
23:18:08 23:11:01      tps     rtps     wtps  bread/s  bwrtn/s
23:18:08 23:12:02   131.49    36.34    95.15  1697.03  58766.28
23:18:08 23:13:01   146.14    23.39   122.76  2799.66  64597.46
23:18:08 23:14:01   214.41     0.38   214.03    47.73  105871.42
23:18:08 23:15:01   363.31    12.86   350.44   792.47  54556.99
23:18:08 23:16:01     6.47     0.00     6.47     0.00    156.84
23:18:08 23:17:01    11.33     0.08    11.25     9.60   1096.88
23:18:08 23:18:01    67.69     1.95    65.74   112.11   2697.75
23:18:08 Average:   134.38    10.69   123.69   775.00  41050.60
23:18:08
23:18:08 23:11:01 kbmemfree  kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:08 23:12:02  30150376 31722576   2788836     8.47     68784  1814488  1391952    4.10   845628 1651488  145184
23:18:08 23:13:01  29503744 31683932   3435468    10.43     90252  2379860  1546896    4.55   972120 2127584  371488
23:18:08 23:14:01  25871476 31642836   7067736    21.46    137064  5775780  1582264    4.66  1043064 5512024 1287296
23:18:08 23:15:01  23618360 29605984   9320852    28.30    157188  5939728  8761928   25.78  3258996 5455892    1716
23:18:08 23:16:01  23654660 29643056   9284552    28.19    157304  5940012  8678888   25.54  3224492 5454124     228
23:18:08 23:17:01  23685004 29699708   9254208    28.09    157712  5968224  8010568   23.57  3184292 5468460     248
23:18:08 23:18:01  25762312 31595044   7176900    21.79    159804  5800436  1547716    4.55  1321508 5312620    2416
23:18:08 Average:  26035133 30799019   6904079    20.96    132587  4802647  4502887   13.25  1978586 4426027  258368
23:18:08
23:18:08 23:11:01 IFACE  rxpck/s  txpck/s   rxkB/s  txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:18:08 23:12:02 docker0     0.00    0.00     0.00    0.00    0.00    0.00    0.00    0.00
23:18:08 23:12:02 ens3       81.86   59.16   883.06   11.30    0.00    0.00    0.00    0.00
23:18:08 23:12:02 lo          1.60    1.60     0.18    0.18    0.00    0.00    0.00    0.00
23:18:08 23:13:01 br-629c0108f165  0.00  0.00   0.00    0.00    0.00    0.00    0.00    0.00
23:18:08 23:13:01 docker0     0.00    0.00     0.00    0.00    0.00    0.00    0.00    0.00
23:18:08 23:13:01 ens3      117.39   84.71  2580.05   11.57    0.00    0.00    0.00    0.00
23:18:08 23:13:01 lo          5.22    5.22     0.50    0.50    0.00    0.00    0.00    0.00
23:18:08 23:14:01 br-629c0108f165  0.00  0.00   0.00    0.00    0.00    0.00    0.00    0.00
23:18:08 23:14:01 docker0     0.00    0.00     0.00    0.00    0.00    0.00    0.00    0.00
23:18:08 23:14:01 ens3     1138.38  499.87 28858.40   35.76    0.00    0.00    0.00    0.00
23:18:08 23:14:01 lo          8.27    8.27     0.81    0.81    0.00    0.00    0.00    0.00
23:18:08 23:15:01 veth4cdc924 69.14  84.49    41.43   20.59    0.00    0.00    0.00    0.00
23:18:08 23:15:01 veth74f5f70  0.13   0.47     0.01    0.03    0.00    0.00    0.00    0.00
23:18:08 23:15:01 veth5bda329  8.77   9.33     1.28    1.25    0.00    0.00    0.00    0.00
23:18:08 23:15:01 veth2dcc38d  0.55   0.88     0.06    0.31    0.00    0.00    0.00    0.00
23:18:08 23:16:01 veth4cdc924 31.01  37.86    37.83   12.25    0.00    0.00    0.00    0.00
23:18:08 23:16:01 veth74f5f70  0.50   0.47     0.05    1.48    0.00    0.00    0.00    0.00
23:18:08 23:16:01 veth5bda329 15.71  10.93     1.41    1.63    0.00    0.00    0.00    0.00
23:18:08 23:16:01 veth2dcc38d  0.23   0.18     0.02    0.01    0.00    0.00    0.00    0.00
23:18:08 23:17:01 veth4cdc924  0.22   0.30     0.11    0.08    0.00    0.00    0.00    0.00
23:18:08 23:17:01 veth5bda329 13.83   9.35     1.05    1.34    0.00    0.00    0.00    0.00
23:18:08 23:17:01 veth8701818  8.33  11.40     1.40    0.98    0.00    0.00    0.00    0.00
23:18:08 23:17:01 br-629c0108f165  4.27  4.68   1.98    2.19    0.00    0.00    0.00    0.00
23:18:08 23:18:01 docker0     0.00    0.00     0.00    0.00    0.00    0.00    0.00    0.00
23:18:08 23:18:01 ens3     1722.96  903.70 33075.45  146.62    0.00    0.00    0.00    0.00
23:18:08 23:18:01 lo         34.88   34.88     6.22    6.22    0.00    0.00    0.00    0.00
23:18:08 Average: docker0     0.00    0.00     0.00    0.00    0.00    0.00    0.00    0.00
23:18:08 Average: ens3      201.91  101.15  4633.65   13.80    0.00    0.00    0.00    0.00
23:18:08 Average: lo          4.44    4.44     0.84    0.84    0.00    0.00    0.00    0.00
23:18:08
23:18:08
23:18:08 ---> sar -P ALL:
23:18:08 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-24270)  04/18/24  _x86_64_  (8 CPU)
23:18:08
23:18:08 23:10:22 LINUX RESTART (8 CPU)
23:18:08
23:18:08 23:11:01 CPU  %user  %nice  %system  %iowait  %steal  %idle
23:18:08 23:12:02 all   9.40   0.00     0.80     4.66    0.13  85.01
23:18:08 23:12:02   0   2.62   0.00     0.25     0.17    0.00  96.97
23:18:08 23:12:02   1  27.04   0.00     1.62     3.29    0.03  68.02
23:18:08 23:12:02   2  21.29   0.00     1.20     0.83    0.05  76.63
23:18:08 23:12:02   3   0.42   0.00     0.35    19.75    0.02  79.46
23:18:08 23:12:02   4   9.91   0.00     0.92     1.49    0.03  87.65
23:18:08 23:12:02   5   6.41   0.00     0.77     0.72    0.02  92.09
23:18:08 23:12:02   6   4.82   0.00     0.89     0.47    0.02  93.80
23:18:08 23:12:02   7   2.79   0.00     0.38    10.62    0.81  85.39
23:18:08 23:13:01 all   9.80   0.00     1.00     4.76    0.04  84.41
23:18:08 23:13:01   0   2.49   0.00     0.53     0.17    0.03  96.78
23:18:08 23:13:01   1   3.76   0.00     0.77     0.17    0.03  95.27
23:18:08 23:13:01   2  17.39   0.00     1.09     2.66    0.03  78.83
23:18:08 23:13:01   3   0.27   0.00     0.41    28.01    0.03  71.27
23:18:08 23:13:01   4  27.78   0.00     1.80     2.93    0.07  67.42
23:18:08 23:13:01   5   8.15   0.00     0.97     0.39    0.03  90.46
23:18:08 23:13:01   6  13.68   0.00     1.41     1.35    0.05  83.51
23:18:08 23:13:01   7   4.95   0.00     1.03     2.42    0.03  91.56
23:18:08 23:14:01 all  12.09   0.00     5.66     8.02    0.07  74.16
23:18:08 23:14:01   0  11.18   0.00     6.00     0.85    0.07  81.90
23:18:08 23:14:01   1  10.68   0.00     6.67    33.65    0.07  48.94
23:18:08 23:14:01   2  12.95   0.00     5.36     0.09    0.09  81.52
23:18:08 23:14:01   3  11.85   0.00     5.79    22.82    0.07  59.48
23:18:08 23:14:01   4  10.18   0.00     3.66     0.41    0.07  85.69
23:18:08 23:14:01   5  14.34   0.00     5.58     1.55    0.07  78.47
23:18:08 23:14:01   6  12.58   0.00     5.60     4.67    0.07  77.08
23:18:08 23:14:01   7  12.96   0.00     6.64     0.30    0.07  80.03
23:18:08 23:15:01 all  29.55   0.00     4.33     4.09    0.08  61.96
23:18:08 23:15:01   0  23.63   0.00     4.04     1.55    0.07  70.71
23:18:08 23:15:01   1  34.87   0.00     4.46     2.85    0.08  57.73
23:18:08 23:15:01   2  32.15   0.00     4.23     1.22    0.08  62.32
23:18:08 23:15:01   3  28.56   0.00     4.84    17.77    0.08  48.75
23:18:08 23:15:01   4  26.51   0.00     3.97     1.14    0.07  68.31
23:18:08 23:15:01   5  31.38   0.00     4.16     1.84    0.08  62.54
23:18:08 23:15:01   6  33.22   0.00     4.53     2.62    0.08  59.54
23:18:08 23:15:01   7  26.09   0.00     4.39     3.75    0.07  65.70
23:18:08 23:16:01 all   4.78   0.00     0.42     0.02    0.04  94.73
23:18:08 23:16:01   0   4.14   0.00     0.30     0.00    0.03  95.53
23:18:08 23:16:01   1   5.11   0.00     0.45     0.00    0.03  94.41
23:18:08 23:16:01   2   3.86   0.00     0.42     0.02    0.07  95.64
23:18:08 23:16:01   3   4.68   0.00     0.52     0.02    0.03  94.76
23:18:08 23:16:01   4   4.11   0.00     0.42     0.10    0.05  95.32
23:18:08 23:16:01   5   5.08   0.00     0.40     0.03    0.02  94.47
23:18:08 23:16:01   6   4.95   0.00     0.31     0.00    0.05  94.69
23:18:08 23:16:01   7   6.34   0.00     0.53     0.02    0.05  93.06
23:18:08 23:17:01 all   1.42   0.00     0.32     0.12    0.04  98.10
23:18:08 23:17:01   0   1.07   0.00     0.30     0.02    0.05  98.57
23:18:08 23:17:01   1   1.25   0.00     0.27     0.00    0.03  98.45
23:18:08 23:17:01   2   2.39   0.00     0.47     0.05    0.07  97.03
23:18:08 23:17:01   3   1.92   0.00     0.30     0.07    0.03  97.68
23:18:08 23:17:01   4   1.08   0.00     0.30     0.38    0.05  98.18
23:18:08 23:17:01   5   1.07   0.00     0.33     0.02    0.03  98.55
23:18:08 23:17:01   6   1.16   0.00     0.29     0.23    0.03  98.29
23:18:08 23:17:01   7   1.39   0.00     0.33     0.23    0.05  97.99
23:18:08 23:18:01 all   6.81   0.00     0.63     0.39    0.03  92.14
23:18:08 23:18:01   0   2.49   0.00     0.62     0.03    0.02  96.84
23:18:08 23:18:01   1  10.96   0.00     0.68     0.20    0.03  88.12
23:18:08 23:18:01   2  15.12   0.00     0.75     0.22    0.03  83.88
23:18:08 23:18:01   3   1.52   0.00     0.48     0.08    0.03  97.88
23:18:08 23:18:01   4   2.42   0.00     0.62     2.12    0.05  94.79
23:18:08 23:18:01   5   5.14   0.00     0.58     0.12    0.02  94.15
23:18:08 23:18:01   6  15.78   0.00     0.85     0.07    0.05  83.25
23:18:08 23:18:01   7   1.10   0.00     0.48     0.27    0.02  98.13
23:18:08 Average: all  10.53   0.00     1.87     3.13    0.06  84.40
23:18:08 Average:   0   6.80   0.00     1.71     0.40    0.04  91.06
23:18:08 Average:   1  13.40   0.00     2.12     5.67    0.05  78.76
23:18:08 Average:   2  15.01   0.00     1.92     0.72    0.06  82.29
23:18:08 Average:   3   7.00   0.00     1.80    12.56    0.04  78.59
23:18:08 Average:   4  11.66   0.00     1.66     1.22    0.06  85.40
23:18:08 Average:   5  10.19   0.00     1.82     0.66    0.04  87.30
23:18:08 Average:   6  12.28   0.00     1.97     1.33    0.05  84.36
23:18:08 Average:   7   7.92   0.00     1.96     2.53    0.16  87.43
23:18:08
23:18:08
23:18:08