23:11:01 Started by timer 23:11:01 Running as SYSTEM 23:11:01 [EnvInject] - Loading node environment variables. 23:11:01 Building remotely on prd-ubuntu1804-docker-8c-8g-24474 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap 23:11:01 [ssh-agent] Looking for ssh-agent implementation... 23:11:01 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 23:11:01 $ ssh-agent 23:11:01 SSH_AUTH_SOCK=/tmp/ssh-jluXIj60uw5M/agent.2105 23:11:01 SSH_AGENT_PID=2107 23:11:01 [ssh-agent] Started. 23:11:01 Running ssh-add (command line suppressed) 23:11:01 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_1582612683718231627.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_1582612683718231627.key) 23:11:01 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 23:11:01 The recommended git tool is: NONE 23:11:02 using credential onap-jenkins-ssh 23:11:02 Wiping out workspace first. 23:11:02 Cloning the remote Git repository 23:11:03 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 23:11:03 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 23:11:03 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 23:11:03 > git --version # timeout=10 23:11:03 > git --version # 'git version 2.17.1' 23:11:03 using GIT_SSH to set credentials Gerrit user 23:11:03 Verifying host key using manually-configured host key entries 23:11:03 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 23:11:03 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 23:11:03 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 23:11:03 Avoid second fetch 23:11:03 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 23:11:03 Checking out Revision f35d01581c8da55946d604e5a444972fe4b0d318 (refs/remotes/origin/master) 23:11:03 > git config core.sparsecheckout # timeout=10 23:11:03 > git checkout -f f35d01581c8da55946d604e5a444972fe4b0d318 # timeout=30 23:11:04 Commit message: "Improvements to CSIT" 23:11:04 > git rev-list --no-walk f35d01581c8da55946d604e5a444972fe4b0d318 # timeout=10 23:11:04 provisioning config files... 
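For reference, the checkout traced above can be reproduced outside Jenkins with plain git; the repository URL, revision and commit message are taken from the log, while the target directory name is arbitrary:

# Minimal sketch: recreate the same workspace checkout locally.
git clone git://cloud.onap.org/mirror/policy/docker.git policy-docker
cd policy-docker
git checkout f35d01581c8da55946d604e5a444972fe4b0d318   # "Improvements to CSIT"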
23:11:04 copy managed file [npmrc] to file:/home/jenkins/.npmrc 23:11:04 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 23:11:04 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1736776534242331966.sh 23:11:04 ---> python-tools-install.sh 23:11:04 Setup pyenv: 23:11:04 * system (set by /opt/pyenv/version) 23:11:04 * 3.8.13 (set by /opt/pyenv/version) 23:11:04 * 3.9.13 (set by /opt/pyenv/version) 23:11:04 * 3.10.6 (set by /opt/pyenv/version) 23:11:08 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-mK3E 23:11:08 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 23:11:12 lf-activate-venv(): INFO: Installing: lftools 23:11:48 lf-activate-venv(): INFO: Adding /tmp/venv-mK3E/bin to PATH 23:11:48 Generating Requirements File 23:12:19 Python 3.10.6 23:12:19 pip 24.0 from /tmp/venv-mK3E/lib/python3.10/site-packages/pip (python 3.10) 23:12:20 appdirs==1.4.4 23:12:20 argcomplete==3.3.0 23:12:20 aspy.yaml==1.3.0 23:12:20 attrs==23.2.0 23:12:20 autopage==0.5.2 23:12:20 beautifulsoup4==4.12.3 23:12:20 boto3==1.34.88 23:12:20 botocore==1.34.88 23:12:20 bs4==0.0.2 23:12:20 cachetools==5.3.3 23:12:20 certifi==2024.2.2 23:12:20 cffi==1.16.0 23:12:20 cfgv==3.4.0 23:12:20 chardet==5.2.0 23:12:20 charset-normalizer==3.3.2 23:12:20 click==8.1.7 23:12:20 cliff==4.6.0 23:12:20 cmd2==2.4.3 23:12:20 cryptography==3.3.2 23:12:20 debtcollector==3.0.0 23:12:20 decorator==5.1.1 23:12:20 defusedxml==0.7.1 23:12:20 Deprecated==1.2.14 23:12:20 distlib==0.3.8 23:12:20 dnspython==2.6.1 23:12:20 docker==4.2.2 23:12:20 dogpile.cache==1.3.2 23:12:20 email_validator==2.1.1 23:12:20 filelock==3.13.4 23:12:20 future==1.0.0 23:12:20 gitdb==4.0.11 23:12:20 GitPython==3.1.43 23:12:20 google-auth==2.29.0 23:12:20 httplib2==0.22.0 23:12:20 identify==2.5.35 23:12:20 idna==3.7 23:12:20 importlib-resources==1.5.0 23:12:20 iso8601==2.1.0 23:12:20 Jinja2==3.1.3 23:12:20 jmespath==1.0.1 23:12:20 jsonpatch==1.33 23:12:20 jsonpointer==2.4 23:12:20 jsonschema==4.21.1 23:12:20 jsonschema-specifications==2023.12.1 23:12:20 keystoneauth1==5.6.0 23:12:20 kubernetes==29.0.0 23:12:20 lftools==0.37.10 23:12:20 lxml==5.2.1 23:12:20 MarkupSafe==2.1.5 23:12:20 msgpack==1.0.8 23:12:20 multi_key_dict==2.0.3 23:12:20 munch==4.0.0 23:12:20 netaddr==1.2.1 23:12:20 netifaces==0.11.0 23:12:20 niet==1.4.2 23:12:20 nodeenv==1.8.0 23:12:20 oauth2client==4.1.3 23:12:20 oauthlib==3.2.2 23:12:20 openstacksdk==3.1.0 23:12:20 os-client-config==2.1.0 23:12:20 os-service-types==1.7.0 23:12:20 osc-lib==3.0.1 23:12:20 oslo.config==9.4.0 23:12:20 oslo.context==5.5.0 23:12:20 oslo.i18n==6.3.0 23:12:20 oslo.log==5.5.1 23:12:20 oslo.serialization==5.4.0 23:12:20 oslo.utils==7.1.0 23:12:20 packaging==24.0 23:12:20 pbr==6.0.0 23:12:20 platformdirs==4.2.0 23:12:20 prettytable==3.10.0 23:12:20 pyasn1==0.6.0 23:12:20 pyasn1_modules==0.4.0 23:12:20 pycparser==2.22 23:12:20 pygerrit2==2.0.15 23:12:20 PyGithub==2.3.0 23:12:20 pyinotify==0.9.6 23:12:20 PyJWT==2.8.0 23:12:20 PyNaCl==1.5.0 23:12:20 pyparsing==2.4.7 23:12:20 pyperclip==1.8.2 23:12:20 pyrsistent==0.20.0 23:12:20 python-cinderclient==9.5.0 23:12:20 python-dateutil==2.9.0.post0 23:12:20 python-heatclient==3.5.0 23:12:20 python-jenkins==1.8.2 23:12:20 python-keystoneclient==5.4.0 23:12:20 python-magnumclient==4.4.0 23:12:20 python-novaclient==18.6.0 23:12:20 python-openstackclient==6.6.0 23:12:20 python-swiftclient==4.5.0 23:12:20 PyYAML==6.0.1 23:12:20 referencing==0.34.0 23:12:20 requests==2.31.0 23:12:20 requests-oauthlib==2.0.0 23:12:20 
requestsexceptions==1.4.0 23:12:20 rfc3986==2.0.0 23:12:20 rpds-py==0.18.0 23:12:20 rsa==4.9 23:12:20 ruamel.yaml==0.18.6 23:12:20 ruamel.yaml.clib==0.2.8 23:12:20 s3transfer==0.10.1 23:12:20 simplejson==3.19.2 23:12:20 six==1.16.0 23:12:20 smmap==5.0.1 23:12:20 soupsieve==2.5 23:12:20 stevedore==5.2.0 23:12:20 tabulate==0.9.0 23:12:20 toml==0.10.2 23:12:20 tomlkit==0.12.4 23:12:20 tqdm==4.66.2 23:12:20 typing_extensions==4.11.0 23:12:20 tzdata==2024.1 23:12:20 urllib3==1.26.18 23:12:20 virtualenv==20.25.3 23:12:20 wcwidth==0.2.13 23:12:20 websocket-client==1.7.0 23:12:20 wrapt==1.16.0 23:12:20 xdg==6.0.0 23:12:20 xmltodict==0.13.0 23:12:20 yq==3.4.1 23:12:20 [EnvInject] - Injecting environment variables from a build step. 23:12:20 [EnvInject] - Injecting as environment variables the properties content 23:12:20 SET_JDK_VERSION=openjdk17 23:12:20 GIT_URL="git://cloud.onap.org/mirror" 23:12:20 23:12:20 [EnvInject] - Variables injected successfully. 23:12:20 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins9883845771782346569.sh 23:12:20 ---> update-java-alternatives.sh 23:12:20 ---> Updating Java version 23:12:20 ---> Ubuntu/Debian system detected 23:12:20 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 23:12:20 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 23:12:20 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 23:12:20 openjdk version "17.0.4" 2022-07-19 23:12:20 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 23:12:20 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 23:12:21 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 23:12:21 [EnvInject] - Injecting environment variables from a build step. 23:12:21 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 23:12:21 [EnvInject] - Variables injected successfully. 
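The Java switch above is done by update-java-alternatives.sh; a minimal manual equivalent on an Ubuntu/Debian node, assuming the same OpenJDK 17 package paths shown in the log, would be:

# Sketch of what the job's Java step effectively does here:
# point the java/javac alternatives at OpenJDK 17 and export JAVA_HOME.
sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
java -version   # expected: openjdk version "17.0.4"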
23:12:21 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins2008288971592734703.sh 23:12:21 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:21 + set +u 23:12:21 + save_set 23:12:21 + RUN_CSIT_SAVE_SET=ehxB 23:12:21 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:21 + '[' 1 -eq 0 ']' 23:12:21 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:21 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:21 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:21 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:21 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:21 + export ROBOT_VARIABLES= 23:12:21 + ROBOT_VARIABLES= 23:12:21 + export PROJECT=pap 23:12:21 + PROJECT=pap 23:12:21 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:21 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:21 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:21 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:21 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:21 + relax_set 23:12:21 + set +e 23:12:21 + set +o pipefail 23:12:21 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:21 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:21 +++ mktemp -d 23:12:21 ++ ROBOT_VENV=/tmp/tmp.zWKE8vd9jz 23:12:21 ++ echo ROBOT_VENV=/tmp/tmp.zWKE8vd9jz 23:12:21 +++ python3 --version 23:12:21 ++ echo 'Python version is: Python 3.6.9' 23:12:21 Python version is: Python 3.6.9 23:12:21 ++ python3 -m venv --clear /tmp/tmp.zWKE8vd9jz 23:12:22 ++ source /tmp/tmp.zWKE8vd9jz/bin/activate 23:12:22 +++ deactivate nondestructive 23:12:22 +++ '[' -n '' ']' 23:12:22 +++ '[' -n '' ']' 23:12:22 +++ '[' -n /bin/bash -o -n '' ']' 23:12:22 +++ hash -r 23:12:22 +++ '[' -n '' ']' 23:12:22 +++ unset VIRTUAL_ENV 23:12:22 +++ '[' '!' 
nondestructive = nondestructive ']' 23:12:22 +++ VIRTUAL_ENV=/tmp/tmp.zWKE8vd9jz 23:12:22 +++ export VIRTUAL_ENV 23:12:22 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:22 +++ PATH=/tmp/tmp.zWKE8vd9jz/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:22 +++ export PATH 23:12:22 +++ '[' -n '' ']' 23:12:22 +++ '[' -z '' ']' 23:12:22 +++ _OLD_VIRTUAL_PS1= 23:12:22 +++ '[' 'x(tmp.zWKE8vd9jz) ' '!=' x ']' 23:12:22 +++ PS1='(tmp.zWKE8vd9jz) ' 23:12:22 +++ export PS1 23:12:22 +++ '[' -n /bin/bash -o -n '' ']' 23:12:22 +++ hash -r 23:12:22 ++ set -exu 23:12:22 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:25 ++ echo 'Installing Python Requirements' 23:12:25 Installing Python Requirements 23:12:25 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:44 ++ python3 -m pip -qq freeze 23:12:44 bcrypt==4.0.1 23:12:44 beautifulsoup4==4.12.3 23:12:44 bitarray==2.9.2 23:12:44 certifi==2024.2.2 23:12:44 cffi==1.15.1 23:12:44 charset-normalizer==2.0.12 23:12:44 cryptography==40.0.2 23:12:44 decorator==5.1.1 23:12:44 elasticsearch==7.17.9 23:12:44 elasticsearch-dsl==7.4.1 23:12:44 enum34==1.1.10 23:12:44 idna==3.7 23:12:44 importlib-resources==5.4.0 23:12:44 ipaddr==2.2.0 23:12:44 isodate==0.6.1 23:12:44 jmespath==0.10.0 23:12:44 jsonpatch==1.32 23:12:44 jsonpath-rw==1.4.0 23:12:44 jsonpointer==2.3 23:12:44 lxml==5.2.1 23:12:44 netaddr==0.8.0 23:12:44 netifaces==0.11.0 23:12:44 odltools==0.1.28 23:12:44 paramiko==3.4.0 23:12:44 pkg_resources==0.0.0 23:12:44 ply==3.11 23:12:44 pyang==2.6.0 23:12:44 pyangbind==0.8.1 23:12:44 pycparser==2.21 23:12:44 pyhocon==0.3.60 23:12:44 PyNaCl==1.5.0 23:12:44 pyparsing==3.1.2 23:12:44 python-dateutil==2.9.0.post0 23:12:44 regex==2023.8.8 23:12:44 requests==2.27.1 23:12:44 robotframework==6.1.1 23:12:44 robotframework-httplibrary==0.4.2 23:12:44 robotframework-pythonlibcore==3.0.0 23:12:44 robotframework-requests==0.9.4 23:12:44 robotframework-selenium2library==3.0.0 23:12:44 robotframework-seleniumlibrary==5.1.3 23:12:44 robotframework-sshlibrary==3.8.0 23:12:44 scapy==2.5.0 23:12:44 scp==0.14.5 23:12:44 selenium==3.141.0 23:12:44 six==1.16.0 23:12:44 soupsieve==2.3.2.post1 23:12:44 urllib3==1.26.18 23:12:44 waitress==2.0.0 23:12:44 WebOb==1.8.7 23:12:44 WebTest==3.0.0 23:12:44 zipp==3.6.0 23:12:44 ++ mkdir -p /tmp/tmp.zWKE8vd9jz/src/onap 23:12:44 ++ rm -rf /tmp/tmp.zWKE8vd9jz/src/onap/testsuite 23:12:44 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:50 ++ echo 'Installing python confluent-kafka library' 23:12:50 Installing python confluent-kafka library 23:12:50 ++ python3 -m pip install -qq confluent-kafka 23:12:51 ++ echo 'Uninstall docker-py and reinstall docker.' 23:12:51 Uninstall docker-py and reinstall docker. 
23:12:51 ++ python3 -m pip uninstall -y -qq docker 23:12:52 ++ python3 -m pip install -U -qq docker 23:12:53 ++ python3 -m pip -qq freeze 23:12:53 bcrypt==4.0.1 23:12:53 beautifulsoup4==4.12.3 23:12:53 bitarray==2.9.2 23:12:53 certifi==2024.2.2 23:12:53 cffi==1.15.1 23:12:53 charset-normalizer==2.0.12 23:12:53 confluent-kafka==2.3.0 23:12:53 cryptography==40.0.2 23:12:53 decorator==5.1.1 23:12:53 deepdiff==5.7.0 23:12:53 dnspython==2.2.1 23:12:53 docker==5.0.3 23:12:53 elasticsearch==7.17.9 23:12:53 elasticsearch-dsl==7.4.1 23:12:53 enum34==1.1.10 23:12:53 future==1.0.0 23:12:53 idna==3.7 23:12:53 importlib-resources==5.4.0 23:12:53 ipaddr==2.2.0 23:12:53 isodate==0.6.1 23:12:53 Jinja2==3.0.3 23:12:53 jmespath==0.10.0 23:12:53 jsonpatch==1.32 23:12:53 jsonpath-rw==1.4.0 23:12:53 jsonpointer==2.3 23:12:53 kafka-python==2.0.2 23:12:53 lxml==5.2.1 23:12:53 MarkupSafe==2.0.1 23:12:53 more-itertools==5.0.0 23:12:53 netaddr==0.8.0 23:12:53 netifaces==0.11.0 23:12:53 odltools==0.1.28 23:12:53 ordered-set==4.0.2 23:12:53 paramiko==3.4.0 23:12:53 pbr==6.0.0 23:12:53 pkg_resources==0.0.0 23:12:53 ply==3.11 23:12:53 protobuf==3.19.6 23:12:53 pyang==2.6.0 23:12:53 pyangbind==0.8.1 23:12:53 pycparser==2.21 23:12:53 pyhocon==0.3.60 23:12:53 PyNaCl==1.5.0 23:12:53 pyparsing==3.1.2 23:12:53 python-dateutil==2.9.0.post0 23:12:53 PyYAML==6.0.1 23:12:53 regex==2023.8.8 23:12:53 requests==2.27.1 23:12:53 robotframework==6.1.1 23:12:53 robotframework-httplibrary==0.4.2 23:12:53 robotframework-onap==0.6.0.dev105 23:12:53 robotframework-pythonlibcore==3.0.0 23:12:53 robotframework-requests==0.9.4 23:12:53 robotframework-selenium2library==3.0.0 23:12:53 robotframework-seleniumlibrary==5.1.3 23:12:53 robotframework-sshlibrary==3.8.0 23:12:53 robotlibcore-temp==1.0.2 23:12:53 scapy==2.5.0 23:12:53 scp==0.14.5 23:12:53 selenium==3.141.0 23:12:53 six==1.16.0 23:12:53 soupsieve==2.3.2.post1 23:12:53 urllib3==1.26.18 23:12:53 waitress==2.0.0 23:12:53 WebOb==1.8.7 23:12:53 websocket-client==1.3.1 23:12:53 WebTest==3.0.0 23:12:53 zipp==3.6.0 23:12:53 ++ uname 23:12:53 ++ grep -q Linux 23:12:53 ++ sudo apt-get -y -qq install libxml2-utils 23:12:53 + load_set 23:12:53 + _setopts=ehuxB 23:12:53 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:53 ++ tr : ' ' 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o braceexpand 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o hashall 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o interactive-comments 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o nounset 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o xtrace 23:12:53 ++ echo ehuxB 23:12:53 ++ sed 's/./& /g' 23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:53 + set +e 23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:53 + set +h 23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:53 + set +u 23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:53 + set +x 23:12:53 + source_safely /tmp/tmp.zWKE8vd9jz/bin/activate 23:12:53 + '[' -z /tmp/tmp.zWKE8vd9jz/bin/activate ']' 23:12:53 + relax_set 23:12:53 + set +e 23:12:53 + set +o pipefail 23:12:53 + . 
/tmp/tmp.zWKE8vd9jz/bin/activate 23:12:53 ++ deactivate nondestructive 23:12:53 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:53 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:53 ++ export PATH 23:12:53 ++ unset _OLD_VIRTUAL_PATH 23:12:53 ++ '[' -n '' ']' 23:12:53 ++ '[' -n /bin/bash -o -n '' ']' 23:12:53 ++ hash -r 23:12:53 ++ '[' -n '' ']' 23:12:53 ++ unset VIRTUAL_ENV 23:12:53 ++ '[' '!' nondestructive = nondestructive ']' 23:12:53 ++ VIRTUAL_ENV=/tmp/tmp.zWKE8vd9jz 23:12:53 ++ export VIRTUAL_ENV 23:12:53 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:53 ++ PATH=/tmp/tmp.zWKE8vd9jz/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:53 ++ export PATH 23:12:53 ++ '[' -n '' ']' 23:12:53 ++ '[' -z '' ']' 23:12:53 ++ _OLD_VIRTUAL_PS1='(tmp.zWKE8vd9jz) ' 23:12:53 ++ '[' 'x(tmp.zWKE8vd9jz) ' '!=' x ']' 23:12:53 ++ PS1='(tmp.zWKE8vd9jz) (tmp.zWKE8vd9jz) ' 23:12:53 ++ export PS1 23:12:53 ++ '[' -n /bin/bash -o -n '' ']' 23:12:53 ++ hash -r 23:12:53 + load_set 23:12:53 + _setopts=hxB 23:12:53 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:53 ++ tr : ' ' 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o braceexpand 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o hashall 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o interactive-comments 23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:53 + set +o xtrace 23:12:53 ++ echo hxB 23:12:53 ++ sed 's/./& /g' 23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:53 + set +h 23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:53 + set +x 23:12:53 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:53 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:53 + export TEST_OPTIONS= 23:12:53 + TEST_OPTIONS= 23:12:53 ++ mktemp -d 23:12:53 + WORKDIR=/tmp/tmp.heIXjbTZZR 23:12:53 + cd /tmp/tmp.heIXjbTZZR 23:12:53 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:54 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:12:54 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:12:54 Configure a credential helper to remove this warning. 
See 23:12:54 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:12:54 23:12:54 Login Succeeded 23:12:54 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:54 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:54 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:12:54 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:54 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:54 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:54 + relax_set 23:12:54 + set +e 23:12:54 + set +o pipefail 23:12:54 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:54 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:12:54 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:54 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:12:54 +++ GERRIT_BRANCH=master 23:12:54 +++ echo GERRIT_BRANCH=master 23:12:54 GERRIT_BRANCH=master 23:12:54 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:12:54 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:12:54 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:12:54 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:12:55 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:55 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:55 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:55 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:55 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:55 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:55 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:12:55 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:55 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:12:55 +++ grafana=false 23:12:55 +++ gui=false 23:12:55 +++ [[ 2 -gt 0 ]] 23:12:55 +++ key=apex-pdp 23:12:55 +++ case $key in 23:12:55 +++ echo apex-pdp 23:12:55 apex-pdp 23:12:55 +++ component=apex-pdp 23:12:55 +++ shift 23:12:55 +++ [[ 1 -gt 0 ]] 23:12:55 +++ key=--grafana 23:12:55 +++ case $key in 23:12:55 +++ grafana=true 23:12:55 +++ shift 23:12:55 +++ [[ 0 -gt 0 ]] 23:12:55 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:12:55 +++ echo 'Configuring docker compose...' 23:12:55 Configuring docker compose... 
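The two sed invocations above derive modified copies of vCPE.policy.monitoring.input.tosca.json for the test suites (a renamed monitored resource, and a version bump from 1.0.0 to 2.0.0). The xtrace output does not show where their output is redirected, so the destination file names in this sketch are hypothetical:

# Sketch of the policy payload preparation, using $DATA as exported above;
# the /tmp/*.json output names are hypothetical (redirections are not
# visible in the xtrace output).
DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies

# Variant 1: change the monitored resource name.
sed -e 's!Measurement_vGMUX!ADifferentValue!' \
  "$DATA/vCPE.policy.monitoring.input.tosca.json" > /tmp/vCPE.policy.monitoring.changed.json

# Variant 2: bump the policy version and policy-version metadata.
sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' \
    -e 's!"policy-version": 1!"policy-version": 2!' \
  "$DATA/vCPE.policy.monitoring.input.tosca.json" > /tmp/vCPE.policy.monitoring.v2.json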
23:12:55 +++ source export-ports.sh 23:12:55 +++ source get-versions.sh 23:12:57 +++ '[' -z pap ']' 23:12:57 +++ '[' -n apex-pdp ']' 23:12:57 +++ '[' apex-pdp == logs ']' 23:12:57 +++ '[' true = true ']' 23:12:57 +++ echo 'Starting apex-pdp application with Grafana' 23:12:57 Starting apex-pdp application with Grafana 23:12:57 +++ docker-compose up -d apex-pdp grafana 23:12:58 Creating network "compose_default" with the default driver 23:12:58 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:12:58 latest: Pulling from prom/prometheus 23:13:02 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 23:13:02 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:13:02 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 23:13:02 latest: Pulling from grafana/grafana 23:13:07 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e 23:13:07 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:13:07 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:13:07 10.10.2: Pulling from mariadb 23:13:12 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:13:12 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:13:12 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 23:13:12 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:16 Digest: sha256:d8f1d8ae67fc0b53114a44577cb43c90a3a3281908d2f2418d7fbd203413bd6a 23:13:16 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 23:13:16 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:16 latest: Pulling from confluentinc/cp-zookeeper 23:13:28 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 23:13:28 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:28 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:28 latest: Pulling from confluentinc/cp-kafka 23:13:32 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa 23:13:32 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:32 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:32 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:14:06 Digest: sha256:76f202a4ce3fb449efc5539e6f77655fea2bbfecb1fbc1342810b45a9f33c637 23:14:06 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:14:06 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 23:14:07 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:14:09 Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce 23:14:09 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:14:09 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:14:09 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:14:12 Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06 23:14:12 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:14:12 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
23:14:13 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:14:19 Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982 23:14:19 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:14:19 Creating simulator ... 23:14:19 Creating prometheus ... 23:14:19 Creating zookeeper ... 23:14:19 Creating mariadb ... 23:14:34 Creating mariadb ... done 23:14:34 Creating policy-db-migrator ... 23:14:35 Creating policy-db-migrator ... done 23:14:35 Creating policy-api ... 23:14:36 Creating policy-api ... done 23:14:37 Creating simulator ... done 23:14:38 Creating prometheus ... done 23:14:38 Creating grafana ... 23:14:39 Creating zookeeper ... done 23:14:39 Creating kafka ... 23:14:41 Creating grafana ... done 23:14:42 Creating kafka ... done 23:14:42 Creating policy-pap ... 23:14:43 Creating policy-pap ... done 23:14:43 Creating policy-apex-pdp ... 23:14:46 Creating policy-apex-pdp ... done 23:14:46 +++ echo 'Prometheus server: http://localhost:30259' 23:14:46 Prometheus server: http://localhost:30259 23:14:46 +++ echo 'Grafana server: http://localhost:30269' 23:14:46 Grafana server: http://localhost:30269 23:14:46 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:46 ++ sleep 10 23:14:56 ++ unset http_proxy https_proxy 23:14:56 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:56 Waiting for REST to come up on localhost port 30003... 23:14:56 NAMES STATUS 23:14:56 policy-apex-pdp Up 10 seconds 23:14:56 policy-pap Up 12 seconds 23:14:56 kafka Up 14 seconds 23:14:56 grafana Up 15 seconds 23:14:56 policy-api Up 20 seconds 23:14:56 policy-db-migrator Up 21 seconds 23:14:56 zookeeper Up 16 seconds 23:14:56 mariadb Up 22 seconds 23:14:56 prometheus Up 17 seconds 23:14:56 simulator Up 18 seconds 23:15:01 NAMES STATUS 23:15:01 policy-apex-pdp Up 15 seconds 23:15:01 policy-pap Up 18 seconds 23:15:01 kafka Up 19 seconds 23:15:01 grafana Up 20 seconds 23:15:01 policy-api Up 25 seconds 23:15:01 policy-db-migrator Up 26 seconds 23:15:01 zookeeper Up 21 seconds 23:15:01 mariadb Up 27 seconds 23:15:01 prometheus Up 22 seconds 23:15:01 simulator Up 24 seconds 23:15:06 NAMES STATUS 23:15:06 policy-apex-pdp Up 20 seconds 23:15:06 policy-pap Up 23 seconds 23:15:06 kafka Up 24 seconds 23:15:06 grafana Up 25 seconds 23:15:06 policy-api Up 30 seconds 23:15:06 policy-db-migrator Up 31 seconds 23:15:06 zookeeper Up 26 seconds 23:15:06 mariadb Up 32 seconds 23:15:06 prometheus Up 28 seconds 23:15:06 simulator Up 29 seconds 23:15:11 NAMES STATUS 23:15:11 policy-apex-pdp Up 25 seconds 23:15:11 policy-pap Up 28 seconds 23:15:11 kafka Up 29 seconds 23:15:11 grafana Up 30 seconds 23:15:11 policy-api Up 35 seconds 23:15:11 zookeeper Up 32 seconds 23:15:11 mariadb Up 37 seconds 23:15:11 prometheus Up 33 seconds 23:15:11 simulator Up 34 seconds 23:15:16 NAMES STATUS 23:15:16 policy-apex-pdp Up 30 seconds 23:15:16 policy-pap Up 33 seconds 23:15:16 kafka Up 34 seconds 23:15:16 grafana Up 35 seconds 23:15:16 policy-api Up 40 seconds 23:15:16 zookeeper Up 37 seconds 23:15:16 mariadb Up 42 seconds 23:15:16 prometheus Up 38 seconds 23:15:16 simulator Up 39 seconds 23:15:22 NAMES STATUS 23:15:22 policy-apex-pdp Up 35 seconds 23:15:22 policy-pap Up 38 seconds 23:15:22 kafka Up 39 seconds 23:15:22 grafana Up 40 seconds 23:15:22 policy-api Up 45 seconds 23:15:22 zookeeper Up 42 seconds 23:15:22 mariadb Up 47 seconds 23:15:22 prometheus Up 43 seconds 23:15:22 simulator Up 44 seconds 23:15:27 NAMES 
STATUS 23:15:27 policy-apex-pdp Up 40 seconds 23:15:27 policy-pap Up 43 seconds 23:15:27 kafka Up 44 seconds 23:15:27 grafana Up 45 seconds 23:15:27 policy-api Up 50 seconds 23:15:27 zookeeper Up 47 seconds 23:15:27 mariadb Up 52 seconds 23:15:27 prometheus Up 48 seconds 23:15:27 simulator Up 49 seconds 23:15:32 NAMES STATUS 23:15:32 policy-apex-pdp Up 45 seconds 23:15:32 policy-pap Up 48 seconds 23:15:32 kafka Up 49 seconds 23:15:32 grafana Up 50 seconds 23:15:32 policy-api Up 55 seconds 23:15:32 zookeeper Up 52 seconds 23:15:32 mariadb Up 57 seconds 23:15:32 prometheus Up 53 seconds 23:15:32 simulator Up 54 seconds 23:15:37 NAMES STATUS 23:15:37 policy-apex-pdp Up 50 seconds 23:15:37 policy-pap Up 53 seconds 23:15:37 kafka Up 54 seconds 23:15:37 grafana Up 55 seconds 23:15:37 policy-api Up About a minute 23:15:37 zookeeper Up 57 seconds 23:15:37 mariadb Up About a minute 23:15:37 prometheus Up 58 seconds 23:15:37 simulator Up 59 seconds 23:15:37 ++ export 'SUITES=pap-test.robot 23:15:37 pap-slas.robot' 23:15:37 ++ SUITES='pap-test.robot 23:15:37 pap-slas.robot' 23:15:37 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:37 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:15:37 + load_set 23:15:37 + _setopts=hxB 23:15:37 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:15:37 ++ tr : ' ' 23:15:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:37 + set +o braceexpand 23:15:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:37 + set +o hashall 23:15:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:37 + set +o interactive-comments 23:15:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:37 + set +o xtrace 23:15:37 ++ echo hxB 23:15:37 ++ sed 's/./& /g' 23:15:37 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:15:37 + set +h 23:15:37 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:15:37 + set +x 23:15:37 + docker_stats 23:15:37 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:15:37 ++ uname -s 23:15:37 + '[' Linux == Darwin ']' 23:15:37 + sh -c 'top -bn1 | head -3' 23:15:37 top - 23:15:37 up 5 min, 0 users, load average: 4.54, 2.01, 0.81 23:15:37 Tasks: 207 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 23:15:37 %Cpu(s): 11.5 us, 2.4 sy, 0.0 ni, 79.1 id, 6.9 wa, 0.0 hi, 0.1 si, 0.1 st 23:15:37 + echo 23:15:37 + sh -c 'free -h' 23:15:37 23:15:37 total used free shared buff/cache available 23:15:37 Mem: 31G 2.6G 22G 1.3M 6.2G 28G 23:15:37 Swap: 1.0G 0B 1.0G 23:15:37 + echo 23:15:37 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:15:37 23:15:37 NAMES STATUS 23:15:37 policy-apex-pdp Up 50 seconds 23:15:37 policy-pap Up 53 seconds 23:15:37 kafka Up 55 seconds 23:15:37 grafana Up 56 seconds 23:15:37 policy-api Up About a minute 23:15:37 zookeeper Up 57 seconds 23:15:37 mariadb Up About a minute 23:15:37 prometheus Up 58 seconds 23:15:37 simulator Up 59 seconds 23:15:37 + echo 23:15:37 + docker stats --no-stream 23:15:37 23:15:40 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:15:40 8b02d48ab3e1 policy-apex-pdp 6.18% 174.1MiB / 31.41GiB 0.54% 13.4kB / 15.1kB 0B / 0B 48 23:15:40 6be5e36accf0 policy-pap 31.09% 538.6MiB / 31.41GiB 1.67% 44kB / 47.6kB 0B / 149MB 62 23:15:40 12cfa2c6678f kafka 75.03% 382.4MiB / 31.41GiB 
1.19% 98kB / 96.9kB 0B / 508kB 84 23:15:40 e4f073fe59f0 grafana 0.04% 52.97MiB / 31.41GiB 0.16% 18.8kB / 3.51kB 0B / 24.9MB 19 23:15:40 1f3a8f792c3a policy-api 0.13% 463.6MiB / 31.41GiB 1.44% 991kB / 649kB 0B / 0B 52 23:15:40 6ccbf7d53aa0 zookeeper 8.40% 103.6MiB / 31.41GiB 0.32% 64.5kB / 58.5kB 0B / 336kB 60 23:15:40 5e02a3d783a0 mariadb 0.02% 101.5MiB / 31.41GiB 0.32% 936kB / 1.19MB 10.9MB / 51.9MB 37 23:15:40 1eafbbd8d8d0 prometheus 0.14% 18.29MiB / 31.41GiB 0.06% 1.28kB / 158B 0B / 0B 10 23:15:40 8e34b4219017 simulator 0.22% 121.8MiB / 31.41GiB 0.38% 1.26kB / 0B 127kB / 0B 77 23:15:40 + echo 23:15:40 23:15:40 + cd /tmp/tmp.heIXjbTZZR 23:15:40 + echo 'Reading the testplan:' 23:15:40 Reading the testplan: 23:15:40 + echo 'pap-test.robot 23:15:40 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:15:40 pap-slas.robot' 23:15:40 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:15:40 + cat testplan.txt 23:15:40 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 23:15:40 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:40 ++ xargs 23:15:40 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 23:15:40 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:40 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:15:40 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:40 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:15:40 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 23:15:40 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 23:15:40 + relax_set 23:15:40 + set +e 23:15:40 + set +o pipefail 23:15:40 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:40 ============================================================================== 23:15:40 pap 23:15:40 ============================================================================== 23:15:40 pap.Pap-Test 23:15:40 ============================================================================== 23:15:41 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... 
| PASS |
23:15:41 ------------------------------------------------------------------------------
23:15:41 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:15:41 ------------------------------------------------------------------------------
23:15:42 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:15:42 ------------------------------------------------------------------------------
23:15:42 Healthcheck :: Verify policy pap health check | PASS |
23:15:42 ------------------------------------------------------------------------------
23:16:02 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:16:02 ------------------------------------------------------------------------------
23:16:02 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:16:02 ------------------------------------------------------------------------------
23:16:03 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:16:03 ------------------------------------------------------------------------------
23:16:03 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:16:03 ------------------------------------------------------------------------------
23:16:03 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:16:03 ------------------------------------------------------------------------------
23:16:04 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:16:04 ------------------------------------------------------------------------------
23:16:04 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:16:04 ------------------------------------------------------------------------------
23:16:04 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:16:04 ------------------------------------------------------------------------------
23:16:04 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:16:04 ------------------------------------------------------------------------------
23:16:04 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:16:04 ------------------------------------------------------------------------------
23:16:05 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:16:05 ------------------------------------------------------------------------------
23:16:05 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:16:05 ------------------------------------------------------------------------------
23:16:05 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:16:05 ------------------------------------------------------------------------------
23:16:25 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:16:25 ------------------------------------------------------------------------------
23:16:25 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:16:25 ------------------------------------------------------------------------------
23:16:26 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:16:26 ------------------------------------------------------------------------------
23:16:26 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:16:26 ------------------------------------------------------------------------------
23:16:26 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:16:26 ------------------------------------------------------------------------------
23:16:26 pap.Pap-Test | PASS |
23:16:26 22 tests, 22 passed, 0 failed
23:16:26 ==============================================================================
23:16:26 pap.Pap-Slas
23:16:26 ==============================================================================
23:17:26 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:17:26 ------------------------------------------------------------------------------
23:17:26 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:17:26 ------------------------------------------------------------------------------
23:17:26 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:17:26 ------------------------------------------------------------------------------
23:17:26 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:17:26 ------------------------------------------------------------------------------
23:17:26 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:17:26 ------------------------------------------------------------------------------
23:17:26 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:17:26 ------------------------------------------------------------------------------
23:17:26 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:17:26 ------------------------------------------------------------------------------
23:17:26 ValidateResponseTimeDeleteGroup :: Validate delete group response ...
| PASS | 23:17:26 ------------------------------------------------------------------------------ 23:17:26 pap.Pap-Slas | PASS | 23:17:26 8 tests, 8 passed, 0 failed 23:17:26 ============================================================================== 23:17:26 pap | PASS | 23:17:26 30 tests, 30 passed, 0 failed 23:17:26 ============================================================================== 23:17:26 Output: /tmp/tmp.heIXjbTZZR/output.xml 23:17:26 Log: /tmp/tmp.heIXjbTZZR/log.html 23:17:26 Report: /tmp/tmp.heIXjbTZZR/report.html 23:17:26 + RESULT=0 23:17:26 + load_set 23:17:26 + _setopts=hxB 23:17:26 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:26 ++ tr : ' ' 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o braceexpand 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o hashall 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o interactive-comments 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o xtrace 23:17:26 ++ echo hxB 23:17:26 ++ sed 's/./& /g' 23:17:26 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:26 + set +h 23:17:26 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:26 + set +x 23:17:26 + echo 'RESULT: 0' 23:17:26 RESULT: 0 23:17:26 + exit 0 23:17:26 + on_exit 23:17:26 + rc=0 23:17:26 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] 23:17:26 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:17:26 NAMES STATUS 23:17:26 policy-apex-pdp Up 2 minutes 23:17:26 policy-pap Up 2 minutes 23:17:26 kafka Up 2 minutes 23:17:26 grafana Up 2 minutes 23:17:26 policy-api Up 2 minutes 23:17:26 zookeeper Up 2 minutes 23:17:26 mariadb Up 2 minutes 23:17:26 prometheus Up 2 minutes 23:17:26 simulator Up 2 minutes 23:17:26 + docker_stats 23:17:26 ++ uname -s 23:17:26 + '[' Linux == Darwin ']' 23:17:26 + sh -c 'top -bn1 | head -3' 23:17:26 top - 23:17:26 up 7 min, 0 users, load average: 0.94, 1.51, 0.77 23:17:26 Tasks: 198 total, 2 running, 128 sleeping, 0 stopped, 0 zombie 23:17:26 %Cpu(s): 9.7 us, 1.9 sy, 0.0 ni, 82.8 id, 5.5 wa, 0.0 hi, 0.0 si, 0.1 st 23:17:26 + echo 23:17:26 23:17:26 + sh -c 'free -h' 23:17:26 total used free shared buff/cache available 23:17:26 Mem: 31G 2.7G 22G 1.3M 6.2G 28G 23:17:26 Swap: 1.0G 0B 1.0G 23:17:26 + echo 23:17:26 23:17:26 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:17:26 NAMES STATUS 23:17:26 policy-apex-pdp Up 2 minutes 23:17:26 policy-pap Up 2 minutes 23:17:26 kafka Up 2 minutes 23:17:26 grafana Up 2 minutes 23:17:26 policy-api Up 2 minutes 23:17:26 zookeeper Up 2 minutes 23:17:26 mariadb Up 2 minutes 23:17:26 prometheus Up 2 minutes 23:17:26 simulator Up 2 minutes 23:17:26 + echo 23:17:26 23:17:26 + docker stats --no-stream 23:17:29 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:17:29 8b02d48ab3e1 policy-apex-pdp 0.41% 178.7MiB / 31.41GiB 0.56% 61kB / 97.3kB 0B / 0B 52 23:17:29 6be5e36accf0 policy-pap 0.38% 483.3MiB / 31.41GiB 1.50% 2.48MB / 1.04MB 0B / 149MB 67 23:17:29 12cfa2c6678f kafka 1.18% 398.1MiB / 31.41GiB 1.24% 264kB / 235kB 0B / 614kB 85 23:17:29 e4f073fe59f0 grafana 0.05% 58.96MiB / 31.41GiB 0.18% 19.6kB / 4.46kB 0B / 24.9MB 19 23:17:29 1f3a8f792c3a policy-api 0.08% 516.2MiB / 31.41GiB 1.60% 2.46MB / 1.1MB 0B / 0B 55 23:17:29 6ccbf7d53aa0 zookeeper 0.08% 101.9MiB / 31.41GiB 0.32% 67.4kB / 60.1kB 0B / 336kB 60 23:17:29 5e02a3d783a0 mariadb 0.02% 102.8MiB / 31.41GiB 0.32% 2.02MB / 4.88MB 10.9MB / 52.1MB 28 23:17:29 1eafbbd8d8d0 prometheus 0.06% 24.71MiB / 31.41GiB 
0.08% 166kB / 10.9kB 0B / 0B 13 23:17:29 8e34b4219017 simulator 0.07% 121.8MiB / 31.41GiB 0.38% 1.5kB / 0B 127kB / 0B 78 23:17:29 + echo 23:17:29 23:17:29 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:17:29 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:17:29 + relax_set 23:17:29 + set +e 23:17:29 + set +o pipefail 23:17:29 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:17:29 ++ echo 'Shut down started!' 23:17:29 Shut down started! 23:17:29 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:29 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:17:29 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:17:29 ++ source export-ports.sh 23:17:29 ++ source get-versions.sh 23:17:31 ++ echo 'Collecting logs from docker compose containers...' 23:17:31 Collecting logs from docker compose containers... 23:17:31 ++ docker-compose logs 23:17:33 ++ cat docker_compose.log 23:17:33 Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, zookeeper, mariadb, prometheus, simulator 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388151686Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-19T23:14:41Z 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388465307Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388498937Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388511578Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388516708Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388522698Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388551688Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388562648Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388566578Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388571038Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388577598Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388585458Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388589068Z level=info msg=Target target=[all] 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388598268Z level=info msg="Path Home" path=/usr/share/grafana 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388602658Z level=info msg="Path Data" path=/var/lib/grafana 23:17:33 grafana 
| logger=settings t=2024-04-19T23:14:41.388607368Z level=info msg="Path Logs" path=/var/log/grafana 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388639488Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388643398Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:17:33 grafana | logger=settings t=2024-04-19T23:14:41.388646418Z level=info msg="App mode production" 23:17:33 grafana | logger=sqlstore t=2024-04-19T23:14:41.38901331Z level=info msg="Connecting to DB" dbtype=sqlite3 23:17:33 grafana | logger=sqlstore t=2024-04-19T23:14:41.38903673Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.389819094Z level=info msg="Starting DB migrations" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.39103076Z level=info msg="Executing migration" id="create migration_log table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.392016935Z level=info msg="Migration successfully executed" id="create migration_log table" duration=985.825µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.416589963Z level=info msg="Executing migration" id="create user table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.418045211Z level=info msg="Migration successfully executed" id="create user table" duration=1.448918ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.426294251Z level=info msg="Executing migration" id="add unique index user.login" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.427172224Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=877.853µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.437507615Z level=info msg="Executing migration" id="add unique index user.email" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.43873357Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.226135ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.446702429Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.447887174Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.178665ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.452703698Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.453551912Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=848.384µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.463359209Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.467539699Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.17971ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.474998935Z level=info msg="Executing migration" id="create user table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.475916539Z level=info msg="Migration successfully executed" id="create user table v2" duration=917.234µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.480915004Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.48211269Z level=info msg="Migration successfully executed" id="create 
index UQE_user_login - v2" duration=1.197546ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.488802862Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.489919017Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.116165ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.496121458Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.496865861Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=719.333µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.504241197Z level=info msg="Executing migration" id="Drop old table user_v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.505298522Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.061315ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.509689644Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.510892549Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.205936ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.5212928Z level=info msg="Executing migration" id="Update user table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.52131945Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.62µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.526480995Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.528256903Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.775038ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.532805675Z level=info msg="Executing migration" id="Add missing user data" 23:17:33 mariadb | 2024-04-19 23:14:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:17:33 mariadb | 2024-04-19 23:14:34+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:17:33 mariadb | 2024-04-19 23:14:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:17:33 mariadb | 2024-04-19 23:14:34+00:00 [Note] [Entrypoint]: Initializing database files 23:17:33 mariadb | 2024-04-19 23:14:35 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:17:33 mariadb | 2024-04-19 23:14:35 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:17:33 mariadb | 2024-04-19 23:14:35 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:17:33 mariadb | 23:17:33 mariadb | 23:17:33 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:17:33 mariadb | To do so, start the server, then issue the following command: 23:17:33 mariadb | 23:17:33 mariadb | '/usr/bin/mysql_secure_installation' 23:17:33 mariadb | 23:17:33 mariadb | which will also give you the option of removing the test 23:17:33 mariadb | databases and anonymous user created by default. This is 23:17:33 mariadb | strongly recommended for production servers. 
23:17:33 mariadb | 23:17:33 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:17:33 mariadb | 23:17:33 mariadb | Please report any problems at https://mariadb.org/jira 23:17:33 mariadb | 23:17:33 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:17:33 mariadb | 23:17:33 mariadb | Consider joining MariaDB's strong and vibrant community: 23:17:33 mariadb | https://mariadb.org/get-involved/ 23:17:33 mariadb | 23:17:33 mariadb | 2024-04-19 23:14:37+00:00 [Note] [Entrypoint]: Database files initialized 23:17:33 mariadb | 2024-04-19 23:14:37+00:00 [Note] [Entrypoint]: Starting temporary server 23:17:33 mariadb | 2024-04-19 23:14:37+00:00 [Note] [Entrypoint]: Waiting for server startup 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: Number of transaction pools: 1 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.533485458Z level=info msg="Migration successfully executed" id="Add missing user data" duration=679.553µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.53793538Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.539238786Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.305786ms 23:17:33 kafka | ===> User 23:17:33 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:17:33 kafka | ===> Configuring ... 23:17:33 kafka | Running in Zookeeper mode... 23:17:33 kafka | ===> Running preflight checks ... 23:17:33 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:17:33 kafka | ===> Check if Zookeeper is healthy ... 23:17:33 kafka | [2024-04-19 23:14:46,396] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:host.name=12cfa2c6678f (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | 
[2024-04-19 23:14:46,397] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,397] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,401] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,405] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:17:33 kafka | [2024-04-19 23:14:46,409] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:17:33 kafka | [2024-04-19 23:14:46,418] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:46,428] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:46,429] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:46,443] INFO Socket connection established, initiating session, client: /172.17.0.9:47142, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:46,624] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000004168b0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:46,755] INFO Session: 0x1000004168b0000 closed (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:46,755] INFO EventThread shut down for session: 0x1000004168b0000 (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | Using log4j config /etc/kafka/log4j.properties 23:17:33 kafka | ===> Launching ... 23:17:33 kafka | ===> Launching kafka ... 
23:17:33 kafka | [2024-04-19 23:14:47,429] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:17:33 kafka | [2024-04-19 23:14:47,735] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:17:33 kafka | [2024-04-19 23:14:47,806] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:17:33 kafka | [2024-04-19 23:14:47,807] INFO starting (kafka.server.KafkaServer) 23:17:33 kafka | [2024-04-19 23:14:47,808] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:17:33 kafka | [2024-04-19 23:14:47,820] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:host.name=12cfa2c6678f (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.
jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0
.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,824] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,825] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,827] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) 23:17:33 kafka | [2024-04-19 23:14:47,831] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:17:33 kafka | [2024-04-19 23:14:47,836] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:47,837] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 23:17:33 kafka | [2024-04-19 23:14:47,842] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:47,848] INFO Socket connection established, initiating session, client: /172.17.0.9:47144, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:47,858] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000004168b0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:17:33 kafka | [2024-04-19 23:14:47,862] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:17:33 kafka | [2024-04-19 23:14:48,677] INFO Cluster ID = pOvPZ_ZqQ6Wyt7DXYtLMbg (kafka.server.KafkaServer) 23:17:33 kafka | [2024-04-19 23:14:48,681] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:17:33 kafka | [2024-04-19 23:14:48,730] INFO KafkaConfig values: 23:17:33 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:17:33 kafka | alter.config.policy.class.name = null 23:17:33 kafka | alter.log.dirs.replication.quota.window.num = 11 23:17:33 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:17:33 kafka | authorizer.class.name = 23:17:33 kafka | auto.create.topics.enable = true 23:17:33 kafka | auto.include.jmx.reporter = true 23:17:33 kafka | auto.leader.rebalance.enable = true 23:17:33 kafka | background.threads = 10 23:17:33 kafka | broker.heartbeat.interval.ms = 2000 23:17:33 kafka | broker.id = 1 23:17:33 kafka | broker.id.generation.enable = true 23:17:33 kafka | broker.rack = null 23:17:33 kafka | broker.session.timeout.ms = 9000 23:17:33 kafka | client.quota.callback.class = null 23:17:33 kafka | compression.type = producer 23:17:33 kafka | connection.failed.authentication.delay.ms = 100 23:17:33 kafka | connections.max.idle.ms = 600000 23:17:33 kafka | connections.max.reauth.ms = 0 23:17:33 kafka | control.plane.listener.name = null 23:17:33 kafka | controlled.shutdown.enable = true 23:17:33 kafka | controlled.shutdown.max.retries = 3 23:17:33 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.549910377Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.551623385Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.717738ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.557600255Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.559548974Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.945328ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.566669248Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.5752748Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.604912ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.582881597Z level=info msg="Executing migration" id="Add uid column to user" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.583982892Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.101005ms 
23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.591515409Z level=info msg="Executing migration" id="Update uid column values for users" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.591923741Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=407.752µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.601135225Z level=info msg="Executing migration" id="Add unique index user_uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.602296371Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.161026ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.613124602Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.613689876Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=564.724µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.618575029Z level=info msg="Executing migration" id="create temp user table v1-7" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.620069626Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.493877ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.626890019Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.627736974Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=841.815µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.630846759Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.632051624Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.204855ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.638450885Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.639643861Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.193146ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.646387533Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.647175348Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=787.645µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.654642113Z level=info msg="Executing migration" id="Update temp_user table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.654684023Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=42.35µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.661011735Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.662105929Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.094414ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.669378565Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.670119908Z 
level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=746.633µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.675772616Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.676909451Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.137415ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.77191236Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.772743773Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=831.423µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.779595867Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.785022273Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.425746ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.793102552Z level=info msg="Executing migration" id="create temp_user v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.794058967Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=955.935µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.800832719Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.802213306Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.380297ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.808682357Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.809823533Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.141016ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.81750923Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.818773006Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.263596ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.826104851Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.827480458Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.376057ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.832460383Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.832949635Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=489.012µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.839411516Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.840451351Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.039385ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.846085718Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:17:33 kafka | controller.listener.names = null 23:17:33 
kafka | controller.quorum.append.linger.ms = 25 23:17:33 kafka | controller.quorum.election.backoff.max.ms = 1000 23:17:33 kafka | controller.quorum.election.timeout.ms = 1000 23:17:33 kafka | controller.quorum.fetch.timeout.ms = 2000 23:17:33 kafka | controller.quorum.request.timeout.ms = 2000 23:17:33 kafka | controller.quorum.retry.backoff.ms = 20 23:17:33 kafka | controller.quorum.voters = [] 23:17:33 kafka | controller.quota.window.num = 11 23:17:33 kafka | controller.quota.window.size.seconds = 1 23:17:33 kafka | controller.socket.timeout.ms = 30000 23:17:33 kafka | create.topic.policy.class.name = null 23:17:33 kafka | default.replication.factor = 1 23:17:33 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:17:33 kafka | delegation.token.expiry.time.ms = 86400000 23:17:33 kafka | delegation.token.master.key = null 23:17:33 kafka | delegation.token.max.lifetime.ms = 604800000 23:17:33 kafka | delegation.token.secret.key = null 23:17:33 kafka | delete.records.purgatory.purge.interval.requests = 1 23:17:33 kafka | delete.topic.enable = true 23:17:33 kafka | early.start.listeners = null 23:17:33 kafka | fetch.max.bytes = 57671680 23:17:33 kafka | fetch.purgatory.purge.interval.requests = 1000 23:17:33 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:17:33 kafka | group.consumer.heartbeat.interval.ms = 5000 23:17:33 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:17:33 kafka | group.consumer.max.session.timeout.ms = 60000 23:17:33 kafka | group.consumer.max.size = 2147483647 23:17:33 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:17:33 kafka | group.consumer.min.session.timeout.ms = 45000 23:17:33 kafka | group.consumer.session.timeout.ms = 45000 23:17:33 kafka | group.coordinator.new.enable = false 23:17:33 kafka | group.coordinator.threads = 1 23:17:33 kafka | group.initial.rebalance.delay.ms = 3000 23:17:33 kafka | group.max.session.timeout.ms = 1800000 23:17:33 kafka | group.max.size = 2147483647 23:17:33 kafka | group.min.session.timeout.ms = 6000 23:17:33 kafka | initial.broker.registration.timeout.ms = 60000 23:17:33 kafka | inter.broker.listener.name = PLAINTEXT 23:17:33 kafka | inter.broker.protocol.version = 3.6-IV2 23:17:33 kafka | kafka.metrics.polling.interval.secs = 10 23:17:33 kafka | kafka.metrics.reporters = [] 23:17:33 kafka | leader.imbalance.check.interval.seconds = 300 23:17:33 kafka | leader.imbalance.per.broker.percentage = 10 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.84654908Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=468.142µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.851901527Z level=info msg="Executing migration" id="create star table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.85258257Z level=info msg="Migration successfully executed" id="create star table" duration=680.473µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.86088267Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.865434252Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=4.551502ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.87357737Z level=info msg="Executing migration" id="create org table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.874384755Z level=info msg="Migration successfully executed" 
id="create org table v1" duration=805.355µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.881932571Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.883207978Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.275197ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.890247571Z level=info msg="Executing migration" id="create org_user table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.891059186Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=811.075µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.899000913Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.900478981Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.477608ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.905508635Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.90654908Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.039965ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.911940026Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.913165413Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.227617ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.919512623Z level=info msg="Executing migration" id="Update org table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.919538023Z level=info msg="Migration successfully executed" id="Update org table charset" duration=26.22µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.926795608Z level=info msg="Executing migration" id="Update org_user table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.926840358Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=46.33µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.933250439Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.933627401Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=376.442µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.938082772Z level=info msg="Executing migration" id="create dashboard table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.939020327Z level=info msg="Migration successfully executed" id="create dashboard table" duration=937.025µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.946231392Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.947705329Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.473867ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.955303056Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.95619025Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=886.944µs 
23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.961158824Z level=info msg="Executing migration" id="create dashboard_tag table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.962262919Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.103465ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.967641635Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.969142863Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.500568ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.981449472Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.982604307Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.159895ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.99145931Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:41.998780245Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.321775ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.007514724Z level=info msg="Executing migration" id="create dashboard v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.00833348Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=817.956µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.015747103Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.016982093Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.2348ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.025113303Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.026691524Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.578901ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.08297181Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:17:33 policy-apex-pdp | Waiting for mariadb port 3306... 23:17:33 policy-apex-pdp | mariadb (172.17.0.4:3306) open 23:17:33 policy-apex-pdp | Waiting for kafka port 9092... 23:17:33 policy-apex-pdp | kafka (172.17.0.9:9092) open 23:17:33 policy-apex-pdp | Waiting for pap port 6969... 
23:17:33 policy-apex-pdp | pap (172.17.0.10:6969) open 23:17:33 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.355+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.619+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:33 policy-apex-pdp | allow.auto.create.topics = true 23:17:33 policy-apex-pdp | auto.commit.interval.ms = 5000 23:17:33 policy-apex-pdp | auto.include.jmx.reporter = true 23:17:33 policy-apex-pdp | auto.offset.reset = latest 23:17:33 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:17:33 policy-apex-pdp | check.crcs = true 23:17:33 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:17:33 policy-apex-pdp | client.id = consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-1 23:17:33 policy-apex-pdp | client.rack = 23:17:33 policy-apex-pdp | connections.max.idle.ms = 540000 23:17:33 policy-apex-pdp | default.api.timeout.ms = 60000 23:17:33 policy-apex-pdp | enable.auto.commit = true 23:17:33 policy-apex-pdp | exclude.internal.topics = true 23:17:33 policy-apex-pdp | fetch.max.bytes = 52428800 23:17:33 policy-apex-pdp | fetch.max.wait.ms = 500 23:17:33 policy-apex-pdp | fetch.min.bytes = 1 23:17:33 policy-apex-pdp | group.id = f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f 23:17:33 policy-apex-pdp | group.instance.id = null 23:17:33 policy-apex-pdp | heartbeat.interval.ms = 3000 23:17:33 policy-apex-pdp | interceptor.classes = [] 23:17:33 policy-apex-pdp | internal.leave.group.on.close = true 23:17:33 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:33 policy-apex-pdp | isolation.level = read_uncommitted 23:17:33 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:17:33 policy-apex-pdp | max.poll.interval.ms = 300000 23:17:33 policy-apex-pdp | max.poll.records = 500 23:17:33 policy-apex-pdp | metadata.max.age.ms = 300000 23:17:33 policy-apex-pdp | metric.reporters = [] 23:17:33 policy-apex-pdp | metrics.num.samples = 2 23:17:33 policy-apex-pdp | metrics.recording.level = INFO 23:17:33 policy-apex-pdp | metrics.sample.window.ms = 30000 23:17:33 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:33 policy-apex-pdp | receive.buffer.bytes = 65536 23:17:33 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:17:33 policy-apex-pdp | reconnect.backoff.ms = 50 23:17:33 policy-apex-pdp | request.timeout.ms = 30000 23:17:33 policy-apex-pdp | retry.backoff.ms = 100 23:17:33 
policy-apex-pdp | sasl.client.callback.handler.class = null 23:17:33 policy-apex-pdp | sasl.jaas.config = null 23:17:33 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 policy-apex-pdp | sasl.kerberos.service.name = null 23:17:33 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 policy-apex-pdp | sasl.login.callback.handler.class = null 23:17:33 policy-apex-pdp | sasl.login.class = null 23:17:33 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:17:33 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:17:33 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:17:33 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:17:33 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:17:33 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:17:33 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:17:33 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.mechanism = GSSAPI 23:17:33 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:17:33 policy-apex-pdp | security.protocol = PLAINTEXT 23:17:33 policy-apex-pdp | security.providers = null 23:17:33 policy-apex-pdp | send.buffer.bytes = 131072 23:17:33 policy-apex-pdp | session.timeout.ms = 45000 23:17:33 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:17:33 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-apex-pdp | ssl.cipher.suites = null 23:17:33 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:17:33 policy-apex-pdp | ssl.engine.factory.class = null 23:17:33 policy-apex-pdp | ssl.key.password = null 23:17:33 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:17:33 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:17:33 policy-apex-pdp | ssl.keystore.key = null 23:17:33 policy-apex-pdp | ssl.keystore.location = null 23:17:33 policy-apex-pdp | ssl.keystore.password = null 23:17:33 policy-apex-pdp | ssl.keystore.type = JKS 23:17:33 policy-apex-pdp | ssl.protocol = TLSv1.3 23:17:33 policy-apex-pdp | ssl.provider = null 23:17:33 policy-apex-pdp | ssl.secure.random.implementation = null 23:17:33 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-apex-pdp | ssl.truststore.certificates = null 23:17:33 policy-apex-pdp | ssl.truststore.location = null 23:17:33 policy-apex-pdp | ssl.truststore.password = null 23:17:33 policy-apex-pdp | ssl.truststore.type = JKS 23:17:33 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-apex-pdp | 23:17:33 policy-apex-pdp | 
[2024-04-19T23:15:36.806+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.806+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.806+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568536804 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.808+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-1, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Subscribed to topic(s): policy-pdp-pap 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.820+00:00|INFO|ServiceManager|main] service manager starting 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.821+00:00|INFO|ServiceManager|main] service manager starting topics 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.822+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.841+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:33 policy-apex-pdp | allow.auto.create.topics = true 23:17:33 policy-apex-pdp | auto.commit.interval.ms = 5000 23:17:33 policy-apex-pdp | auto.include.jmx.reporter = true 23:17:33 policy-apex-pdp | auto.offset.reset = latest 23:17:33 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:17:33 policy-apex-pdp | check.crcs = true 23:17:33 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:17:33 policy-apex-pdp | client.id = consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2 23:17:33 policy-apex-pdp | client.rack = 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.083555584Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=583.434µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.090546826Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.092158768Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.611282ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.103668924Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.103728134Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=59.8µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.119281279Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.12219147Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.909661ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.127226927Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.129113242Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.888705ms 
23:17:33 policy-apex-pdp | connections.max.idle.ms = 540000 23:17:33 policy-apex-pdp | default.api.timeout.ms = 60000 23:17:33 policy-apex-pdp | enable.auto.commit = true 23:17:33 policy-apex-pdp | exclude.internal.topics = true 23:17:33 policy-apex-pdp | fetch.max.bytes = 52428800 23:17:33 policy-apex-pdp | fetch.max.wait.ms = 500 23:17:33 policy-apex-pdp | fetch.min.bytes = 1 23:17:33 policy-apex-pdp | group.id = f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f 23:17:33 policy-apex-pdp | group.instance.id = null 23:17:33 policy-apex-pdp | heartbeat.interval.ms = 3000 23:17:33 policy-apex-pdp | interceptor.classes = [] 23:17:33 policy-apex-pdp | internal.leave.group.on.close = true 23:17:33 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:33 policy-apex-pdp | isolation.level = read_uncommitted 23:17:33 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:17:33 policy-apex-pdp | max.poll.interval.ms = 300000 23:17:33 policy-apex-pdp | max.poll.records = 500 23:17:33 policy-apex-pdp | metadata.max.age.ms = 300000 23:17:33 policy-apex-pdp | metric.reporters = [] 23:17:33 policy-apex-pdp | metrics.num.samples = 2 23:17:33 policy-apex-pdp | metrics.recording.level = INFO 23:17:33 policy-apex-pdp | metrics.sample.window.ms = 30000 23:17:33 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:33 policy-apex-pdp | receive.buffer.bytes = 65536 23:17:33 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:17:33 policy-apex-pdp | reconnect.backoff.ms = 50 23:17:33 policy-apex-pdp | request.timeout.ms = 30000 23:17:33 policy-apex-pdp | retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.client.callback.handler.class = null 23:17:33 policy-apex-pdp | sasl.jaas.config = null 23:17:33 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 policy-apex-pdp | sasl.kerberos.service.name = null 23:17:33 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 policy-apex-pdp | sasl.login.callback.handler.class = null 23:17:33 policy-apex-pdp | sasl.login.class = null 23:17:33 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:17:33 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:17:33 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:17:33 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:17:33 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:17:33 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:17:33 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.136134393Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.137962027Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.827364ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.146136217Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.147013764Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=874.197µs 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:42.156889037Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.160137001Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.247244ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.167369284Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.169175107Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.797203ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.178204494Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.178971829Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=768.475µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.185421378Z level=info msg="Executing migration" id="Update dashboard table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.185537469Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=117.211µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.199043549Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.19924257Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=198.471µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.208184976Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.211883213Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.697417ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.220348555Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.222950585Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.60016ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.229904867Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.2318688Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.963283ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.23718614Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.239180425Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.994055ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.245496741Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.245806663Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=309.692µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.252483553Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.254031594Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.541971ms 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:42.26029907Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.261182718Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=884.048µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.278648947Z level=info msg="Executing migration" id="Update dashboard title length" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.278687517Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=40.4µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.285401166Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.286632846Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.23368ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.296051545Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.296822741Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=771.466µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.304636249Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.312612608Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.973549ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.317957118Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.318459861Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=502.553µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.323276018Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.324539637Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.263329ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.333865586Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.33851865Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=4.652484ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.39646859Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.397060354Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=592.824µs 23:17:33 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:17:33 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:17:33 kafka | log.cleaner.backoff.ms = 15000 23:17:33 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:17:33 kafka | log.cleaner.delete.retention.ms = 86400000 23:17:33 kafka | log.cleaner.enable = true 23:17:33 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:17:33 kafka | log.cleaner.io.buffer.size = 524288 
23:17:33 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:17:33 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:17:33 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:17:33 kafka | log.cleaner.min.compaction.lag.ms = 0 23:17:33 kafka | log.cleaner.threads = 1 23:17:33 kafka | log.cleanup.policy = [delete] 23:17:33 kafka | log.dir = /tmp/kafka-logs 23:17:33 kafka | log.dirs = /var/lib/kafka/data 23:17:33 kafka | log.flush.interval.messages = 9223372036854775807 23:17:33 kafka | log.flush.interval.ms = null 23:17:33 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:17:33 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:17:33 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:17:33 kafka | log.index.interval.bytes = 4096 23:17:33 kafka | log.index.size.max.bytes = 10485760 23:17:33 kafka | log.local.retention.bytes = -2 23:17:33 kafka | log.local.retention.ms = -2 23:17:33 kafka | log.message.downconversion.enable = true 23:17:33 kafka | log.message.format.version = 3.0-IV1 23:17:33 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:17:33 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: Completed initialization of buffer pool 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: 128 rollback segments are active. 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] Plugin 'FEEDBACK' is disabled. 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:17:33 mariadb | 2024-04-19 23:14:37 0 [Note] mariadbd: ready for connections. 23:17:33 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:17:33 mariadb | 2024-04-19 23:14:38+00:00 [Note] [Entrypoint]: Temporary server started. 
23:17:33 mariadb | 2024-04-19 23:14:41+00:00 [Note] [Entrypoint]: Creating user policy_user 23:17:33 mariadb | 2024-04-19 23:14:41+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:17:33 mariadb | 23:17:33 mariadb | 23:17:33 mariadb | 2024-04-19 23:14:41+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:17:33 mariadb | 2024-04-19 23:14:41+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:17:33 mariadb | #!/bin/bash -xv 23:17:33 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:17:33 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:17:33 mariadb | # 23:17:33 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:17:33 mariadb | # you may not use this file except in compliance with the License. 23:17:33 mariadb | # You may obtain a copy of the License at 23:17:33 mariadb | # 23:17:33 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:17:33 mariadb | # 23:17:33 mariadb | # Unless required by applicable law or agreed to in writing, software 23:17:33 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:17:33 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:17:33 mariadb | # See the License for the specific language governing permissions and 23:17:33 mariadb | # limitations under the License. 23:17:33 mariadb | 23:17:33 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:33 mariadb | do 23:17:33 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:17:33 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:17:33 mariadb | done 23:17:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:17:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:33 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:17:33 kafka | log.message.timestamp.type = CreateTime 23:17:33 kafka | log.preallocate = false 23:17:33 kafka | log.retention.bytes = -1 23:17:33 kafka | log.retention.check.interval.ms = 300000 23:17:33 kafka | log.retention.hours = 168 23:17:33 kafka | log.retention.minutes = null 23:17:33 kafka | log.retention.ms = null 23:17:33 kafka | log.roll.hours = 168 23:17:33 kafka | log.roll.jitter.hours = 0 23:17:33 kafka | log.roll.jitter.ms = null 23:17:33 kafka | log.roll.ms = null 23:17:33 kafka | log.segment.bytes = 1073741824 23:17:33 kafka | log.segment.delete.delay.ms = 60000 23:17:33 kafka | max.connection.creation.rate = 2147483647 23:17:33 kafka | max.connections = 2147483647 23:17:33 kafka | max.connections.per.ip = 2147483647 23:17:33 kafka | max.connections.per.ip.overrides = 23:17:33 kafka | max.incremental.fetch.session.cache.slots = 1000 23:17:33 kafka | message.max.bytes = 1048588 23:17:33 kafka | metadata.log.dir = null 23:17:33 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:17:33 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:17:33 kafka | 
metadata.log.segment.bytes = 1073741824 23:17:33 kafka | metadata.log.segment.min.bytes = 8388608 23:17:33 kafka | metadata.log.segment.ms = 604800000 23:17:33 kafka | metadata.max.idle.interval.ms = 500 23:17:33 kafka | metadata.max.retention.bytes = 104857600 23:17:33 kafka | metadata.max.retention.ms = 604800000 23:17:33 kafka | metric.reporters = [] 23:17:33 kafka | metrics.num.samples = 2 23:17:33 kafka | metrics.recording.level = INFO 23:17:33 kafka | metrics.sample.window.ms = 30000 23:17:33 kafka | min.insync.replicas = 1 23:17:33 kafka | node.id = 1 23:17:33 kafka | num.io.threads = 8 23:17:33 kafka | num.network.threads = 3 23:17:33 kafka | num.partitions = 1 23:17:33 kafka | num.recovery.threads.per.data.dir = 1 23:17:33 kafka | num.replica.alter.log.dirs.threads = null 23:17:33 kafka | num.replica.fetchers = 1 23:17:33 kafka | offset.metadata.max.bytes = 4096 23:17:33 kafka | offsets.commit.required.acks = -1 23:17:33 kafka | offsets.commit.timeout.ms = 5000 23:17:33 kafka | offsets.load.buffer.size = 5242880 23:17:33 kafka | offsets.retention.check.interval.ms = 600000 23:17:33 kafka | offsets.retention.minutes = 10080 23:17:33 kafka | offsets.topic.compression.codec = 0 23:17:33 kafka | offsets.topic.num.partitions = 50 23:17:33 kafka | offsets.topic.replication.factor = 1 23:17:33 kafka | offsets.topic.segment.bytes = 104857600 23:17:33 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:17:33 kafka | password.encoder.iterations = 4096 23:17:33 kafka | password.encoder.key.length = 128 23:17:33 kafka | password.encoder.keyfactory.algorithm = null 23:17:33 kafka | password.encoder.old.secret = null 23:17:33 kafka | password.encoder.secret = null 23:17:33 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:17:33 kafka | process.roles = [] 23:17:33 kafka | producer.id.expiration.check.interval.ms = 600000 23:17:33 kafka | producer.id.expiration.ms = 86400000 23:17:33 kafka | producer.purgatory.purge.interval.requests = 1000 23:17:33 kafka | queued.max.request.bytes = -1 23:17:33 kafka | queued.max.requests = 500 23:17:33 kafka | quota.window.num = 11 23:17:33 kafka | quota.window.size.seconds = 1 23:17:33 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:17:33 kafka | remote.log.manager.task.interval.ms = 30000 23:17:33 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:17:33 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:17:33 kafka | remote.log.manager.task.retry.jitter = 0.2 23:17:33 kafka | remote.log.manager.thread.pool.size = 10 23:17:33 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:17:33 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:17:33 kafka | remote.log.metadata.manager.class.path = null 23:17:33 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
23:17:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:17:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:17:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:17:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:17:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:17:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:33 mariadb | 23:17:33 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:17:33 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:17:33 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:17:33 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:17:33 mariadb | 23:17:33 mariadb | 2024-04-19 23:14:42+00:00 [Note] [Entrypoint]: Stopping temporary server 23:17:33 mariadb | 2024-04-19 23:14:42 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:17:33 mariadb | 2024-04-19 23:14:42 0 [Note] InnoDB: FTS optimize thread exiting. 23:17:33 mariadb | 2024-04-19 23:14:42 0 [Note] InnoDB: Starting shutdown... 23:17:33 mariadb | 2024-04-19 23:14:42 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:17:33 mariadb | 2024-04-19 23:14:42 0 [Note] InnoDB: Buffer pool(s) dump completed at 240419 23:14:42 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Shutdown completed; log sequence number 320798; transaction id 298 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] mariadbd: Shutdown complete 23:17:33 mariadb | 23:17:33 mariadb | 2024-04-19 23:14:43+00:00 [Note] [Entrypoint]: Temporary server stopped 23:17:33 mariadb | 23:17:33 mariadb | 2024-04-19 23:14:43+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:17:33 mariadb | 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Number of transaction pools: 1 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Completed initialization of buffer pool 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: 128 rollback segments are active. 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: log sequence number 320798; transaction id 299 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] Plugin 'FEEDBACK' is disabled. 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] Server socket created on IP: '0.0.0.0'. 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] Server socket created on IP: '::'. 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] mariadbd: ready for connections. 
23:17:33 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:17:33 mariadb | 2024-04-19 23:14:43 0 [Note] InnoDB: Buffer pool(s) load completed at 240419 23:14:43 23:17:33 mariadb | 2024-04-19 23:14:43 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:17:33 mariadb | 2024-04-19 23:14:44 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:17:33 mariadb | 2024-04-19 23:14:44 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 23:17:33 mariadb | 2024-04-19 23:14:46 34 [Warning] Aborted connection 34 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:17:33 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.mechanism = GSSAPI 23:17:33 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:17:33 policy-apex-pdp | security.protocol = PLAINTEXT 23:17:33 policy-apex-pdp | security.providers = null 23:17:33 policy-apex-pdp | send.buffer.bytes = 131072 23:17:33 policy-apex-pdp | session.timeout.ms = 45000 23:17:33 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:17:33 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-apex-pdp | ssl.cipher.suites = null 23:17:33 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:17:33 policy-apex-pdp | ssl.engine.factory.class = null 23:17:33 policy-apex-pdp | ssl.key.password = null 23:17:33 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:17:33 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:17:33 policy-apex-pdp | ssl.keystore.key = null 23:17:33 policy-apex-pdp | ssl.keystore.location = null 23:17:33 policy-apex-pdp | ssl.keystore.password = null 23:17:33 policy-apex-pdp | ssl.keystore.type = JKS 23:17:33 policy-apex-pdp | ssl.protocol = TLSv1.3 23:17:33 policy-apex-pdp | ssl.provider = null 23:17:33 policy-apex-pdp | ssl.secure.random.implementation = null 23:17:33 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-apex-pdp | ssl.truststore.certificates = null 23:17:33 policy-apex-pdp | ssl.truststore.location = null 23:17:33 policy-apex-pdp | ssl.truststore.password = null 23:17:33 policy-apex-pdp | ssl.truststore.type = JKS 23:17:33 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-apex-pdp | 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.849+00:00|INFO|AppInfoParser|main] Kafka 
version: 3.6.1 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.849+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.849+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568536849 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.849+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Subscribed to topic(s): policy-pdp-pap 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.850+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=585417c1-7e2b-4045-84fc-b0c77bef1ab2, alive=false, publisher=null]]: starting 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.879+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:17:33 policy-apex-pdp | acks = -1 23:17:33 policy-apex-pdp | auto.include.jmx.reporter = true 23:17:33 policy-apex-pdp | batch.size = 16384 23:17:33 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:17:33 policy-apex-pdp | buffer.memory = 33554432 23:17:33 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:17:33 kafka | remote.log.metadata.manager.listener.name = null 23:17:33 kafka | remote.log.reader.max.pending.tasks = 100 23:17:33 kafka | remote.log.reader.threads = 10 23:17:33 kafka | remote.log.storage.manager.class.name = null 23:17:33 kafka | remote.log.storage.manager.class.path = null 23:17:33 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 23:17:33 kafka | remote.log.storage.system.enable = false 23:17:33 kafka | replica.fetch.backoff.ms = 1000 23:17:33 kafka | replica.fetch.max.bytes = 1048576 23:17:33 kafka | replica.fetch.min.bytes = 1 23:17:33 kafka | replica.fetch.response.max.bytes = 10485760 23:17:33 kafka | replica.fetch.wait.max.ms = 500 23:17:33 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:17:33 kafka | replica.lag.time.max.ms = 30000 23:17:33 kafka | replica.selector.class = null 23:17:33 kafka | replica.socket.receive.buffer.bytes = 65536 23:17:33 kafka | replica.socket.timeout.ms = 30000 23:17:33 kafka | replication.quota.window.num = 11 23:17:33 kafka | replication.quota.window.size.seconds = 1 23:17:33 kafka | request.timeout.ms = 30000 23:17:33 kafka | reserved.broker.max.id = 1000 23:17:33 kafka | sasl.client.callback.handler.class = null 23:17:33 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:17:33 kafka | sasl.jaas.config = null 23:17:33 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:17:33 kafka | sasl.kerberos.service.name = null 23:17:33 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 kafka | sasl.login.callback.handler.class = null 23:17:33 kafka | sasl.login.class = null 23:17:33 kafka | sasl.login.connect.timeout.ms = null 23:17:33 kafka | sasl.login.read.timeout.ms = null 23:17:33 kafka | sasl.login.refresh.buffer.seconds = 300 23:17:33 kafka | sasl.login.refresh.min.period.seconds = 60 23:17:33 kafka | sasl.login.refresh.window.factor = 0.8 23:17:33 kafka | sasl.login.refresh.window.jitter = 0.05 23:17:33 kafka | sasl.login.retry.backoff.max.ms = 10000 23:17:33 kafka | sasl.login.retry.backoff.ms = 100 23:17:33 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:17:33 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:17:33 kafka | 
sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 kafka | sasl.oauthbearer.expected.audience = null 23:17:33 kafka | sasl.oauthbearer.expected.issuer = null 23:17:33 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 kafka | sasl.oauthbearer.scope.claim.name = scope 23:17:33 kafka | sasl.oauthbearer.sub.claim.name = sub 23:17:33 kafka | sasl.oauthbearer.token.endpoint.url = null 23:17:33 kafka | sasl.server.callback.handler.class = null 23:17:33 kafka | sasl.server.max.receive.size = 524288 23:17:33 kafka | security.inter.broker.protocol = PLAINTEXT 23:17:33 kafka | security.providers = null 23:17:33 kafka | server.max.startup.time.ms = 9223372036854775807 23:17:33 kafka | socket.connection.setup.timeout.max.ms = 30000 23:17:33 kafka | socket.connection.setup.timeout.ms = 10000 23:17:33 kafka | socket.listen.backlog.size = 50 23:17:33 kafka | socket.receive.buffer.bytes = 102400 23:17:33 policy-db-migrator | Waiting for mariadb port 3306... 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:17:33 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 
23:17:33 policy-db-migrator | 321 blocks 23:17:33 policy-db-migrator | Preparing upgrade release version: 0800 23:17:33 policy-db-migrator | Preparing upgrade release version: 0900 23:17:33 policy-db-migrator | Preparing upgrade release version: 1000 23:17:33 policy-db-migrator | Preparing upgrade release version: 1100 23:17:33 policy-db-migrator | Preparing upgrade release version: 1200 23:17:33 policy-db-migrator | Preparing upgrade release version: 1300 23:17:33 policy-db-migrator | Done 23:17:33 policy-db-migrator | name version 23:17:33 policy-db-migrator | policyadmin 0 23:17:33 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:17:33 policy-db-migrator | upgrade: 0 -> 1300 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 kafka | socket.request.max.bytes = 104857600 23:17:33 kafka | socket.send.buffer.bytes = 102400 23:17:33 kafka | ssl.cipher.suites = [] 23:17:33 kafka | ssl.client.auth = none 23:17:33 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 kafka | ssl.endpoint.identification.algorithm = https 23:17:33 kafka | ssl.engine.factory.class = null 23:17:33 kafka | ssl.key.password = null 23:17:33 kafka | ssl.keymanager.algorithm = SunX509 23:17:33 kafka | ssl.keystore.certificate.chain = null 23:17:33 kafka | ssl.keystore.key = null 23:17:33 kafka | ssl.keystore.location = null 23:17:33 kafka | ssl.keystore.password = null 23:17:33 kafka | ssl.keystore.type = JKS 23:17:33 kafka | ssl.principal.mapping.rules = DEFAULT 23:17:33 kafka | ssl.protocol = TLSv1.3 23:17:33 kafka | ssl.provider = null 23:17:33 kafka | ssl.secure.random.implementation = null 23:17:33 kafka | ssl.trustmanager.algorithm = PKIX 23:17:33 kafka | ssl.truststore.certificates = null 23:17:33 kafka | ssl.truststore.location = null 23:17:33 kafka | ssl.truststore.password = null 23:17:33 kafka | ssl.truststore.type = JKS 23:17:33 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:17:33 kafka | transaction.max.timeout.ms = 900000 23:17:33 kafka | transaction.partition.verification.enable = true 23:17:33 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:17:33 kafka | transaction.state.log.load.buffer.size = 5242880 23:17:33 kafka | transaction.state.log.min.isr = 2 23:17:33 kafka | transaction.state.log.num.partitions = 50 23:17:33 kafka | transaction.state.log.replication.factor = 3 23:17:33 kafka | transaction.state.log.segment.bytes = 104857600 23:17:33 kafka | transactional.id.expiration.ms = 604800000 23:17:33 kafka | unclean.leader.election.enable = false 23:17:33 kafka | unstable.api.versions.enable = false 23:17:33 kafka | zookeeper.clientCnxnSocket = null 23:17:33 kafka | zookeeper.connect = zookeeper:2181 23:17:33 kafka | zookeeper.connection.timeout.ms = null 23:17:33 kafka | zookeeper.max.in.flight.requests = 10 23:17:33 kafka | zookeeper.metadata.migration.enable = false 23:17:33 kafka | zookeeper.metadata.migration.min.batch.size = 200 23:17:33 kafka | zookeeper.session.timeout.ms = 18000 23:17:33 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:17:33 
policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:17:33 policy-db-migrator 
| -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 prometheus | ts=2024-04-19T23:14:38.849Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:17:33 prometheus | ts=2024-04-19T23:14:38.849Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" 23:17:33 prometheus | ts=2024-04-19T23:14:38.849Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" 23:17:33 prometheus | ts=2024-04-19T23:14:38.849Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:17:33 prometheus | ts=2024-04-19T23:14:38.849Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 23:17:33 prometheus | ts=2024-04-19T23:14:38.849Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:17:33 prometheus | ts=2024-04-19T23:14:38.852Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:17:33 prometheus | ts=2024-04-19T23:14:38.852Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
23:17:33 prometheus | ts=2024-04-19T23:14:38.857Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:17:33 prometheus | ts=2024-04-19T23:14:38.857Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:17:33 prometheus | ts=2024-04-19T23:14:38.864Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:17:33 prometheus | ts=2024-04-19T23:14:38.864Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.54µs 23:17:33 prometheus | ts=2024-04-19T23:14:38.864Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:17:33 prometheus | ts=2024-04-19T23:14:38.864Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:17:33 prometheus | ts=2024-04-19T23:14:38.864Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=108.54µs wal_replay_duration=342.922µs wbl_replay_duration=320ns total_replay_duration=579.433µs 23:17:33 prometheus | ts=2024-04-19T23:14:38.867Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 23:17:33 prometheus | ts=2024-04-19T23:14:38.867Z caller=main.go:1153 level=info msg="TSDB started" 23:17:33 prometheus | ts=2024-04-19T23:14:38.867Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:17:33 prometheus | ts=2024-04-19T23:14:38.867Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=832.643µs db_storage=1.45µs remote_storage=1.87µs web_handler=650ns query_engine=730ns scrape=176.881µs scrape_sd=94.73µs notify=24.55µs notify_sd=19.82µs rules=2.92µs tracing=6.05µs 23:17:33 prometheus | ts=2024-04-19T23:14:38.867Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 23:17:33 prometheus | ts=2024-04-19T23:14:38.868Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.406341254Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.407742284Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.39973ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.413364105Z level=info msg="Executing migration" id="Add check_sum column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.416830661Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.466246ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.424607098Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.425451444Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=843.776µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.452576246Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.45310451Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=582.144µs 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | zookeeper.set.acl = false 23:17:33 kafka | zookeeper.ssl.cipher.suites = null 23:17:33 kafka | zookeeper.ssl.client.enable = false 23:17:33 kafka | zookeeper.ssl.crl.enable = false 23:17:33 kafka | zookeeper.ssl.enabled.protocols = null 23:17:33 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:17:33 kafka | 
zookeeper.ssl.keystore.location = null 23:17:33 kafka | zookeeper.ssl.keystore.password = null 23:17:33 kafka | zookeeper.ssl.keystore.type = null 23:17:33 kafka | zookeeper.ssl.ocsp.enable = false 23:17:33 kafka | zookeeper.ssl.protocol = TLSv1.2 23:17:33 kafka | zookeeper.ssl.truststore.location = null 23:17:33 kafka | zookeeper.ssl.truststore.password = null 23:17:33 kafka | zookeeper.ssl.truststore.type = null 23:17:33 kafka | (kafka.server.KafkaConfig) 23:17:33 kafka | [2024-04-19 23:14:48,760] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:33 kafka | [2024-04-19 23:14:48,760] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:33 kafka | [2024-04-19 23:14:48,763] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:33 kafka | [2024-04-19 23:14:48,765] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:33 kafka | [2024-04-19 23:14:48,810] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:17:33 kafka | [2024-04-19 23:14:48,818] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:17:33 kafka | [2024-04-19 23:14:48,827] INFO Loaded 0 logs in 16ms (kafka.log.LogManager) 23:17:33 kafka | [2024-04-19 23:14:48,828] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:17:33 kafka | [2024-04-19 23:14:48,829] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:17:33 kafka | [2024-04-19 23:14:48,839] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:17:33 kafka | [2024-04-19 23:14:48,881] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:17:33 kafka | [2024-04-19 23:14:48,899] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:17:33 kafka | [2024-04-19 23:14:48,912] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:17:33 kafka | [2024-04-19 23:14:48,994] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:17:33 kafka | [2024-04-19 23:14:49,337] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:17:33 kafka | [2024-04-19 23:14:49,364] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:17:33 kafka | [2024-04-19 23:14:49,365] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:17:33 kafka | [2024-04-19 23:14:49,372] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:17:33 kafka | [2024-04-19 23:14:49,377] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:17:33 kafka | [2024-04-19 23:14:49,404] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,406] INFO [ExpirationReaper-1-Fetch]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,407] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,408] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,414] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,431] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:17:33 kafka | [2024-04-19 23:14:49,432] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:17:33 kafka | [2024-04-19 23:14:49,454] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 23:17:33 kafka | [2024-04-19 23:14:49,513] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1713568489466,1713568489466,1,0,0,72057611596005377,258,0,27 23:17:33 kafka | (kafka.zk.KafkaZkClient) 23:17:33 kafka | [2024-04-19 23:14:49,514] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:17:33 kafka | [2024-04-19 23:14:49,597] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:17:33 kafka | [2024-04-19 23:14:49,603] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,610] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,610] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:17:33 policy-db-migrator | 
-------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:17:33 kafka | [2024-04-19 23:14:49,622] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:14:49,684] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.492022688Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.49232669Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=304.362µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.513193976Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.514711266Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.51686ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.535221739Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.540931901Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=5.713482ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.601967463Z level=info msg="Executing migration" id="create data_source table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.603801197Z level=info msg="Migration successfully executed" id="create data_source table" duration=2.447878ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.724960155Z level=info msg="Executing migration" id="add index data_source.account_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.726533707Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.576372ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.835960218Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.838047024Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=2.086596ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.89023373Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.891007446Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=774.586µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.930394568Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.931510386Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.116088ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.961783811Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:42.970289594Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.509303ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.050938722Z level=info msg="Executing migration" id="create data_source table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.051737566Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=800.534µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.07141111Z level=info msg="Executing migration" id="create index 
IDX_data_source_org_id - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.072010592Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=595.142µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.108765365Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.109388599Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=620.924µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.123305632Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.123695854Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=390.312µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.139836058Z level=info msg="Executing migration" id="Add column with_credentials" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.141486947Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.651059ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.152988157Z level=info msg="Executing migration" id="Add secure json data column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.154608736Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.620469ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.164054145Z level=info msg="Executing migration" id="Update data_source table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.164077315Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=24.06µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.190364753Z level=info msg="Executing migration" id="Update initial version to 1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.190528425Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=163.872µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.195175458Z level=info msg="Executing migration" id="Add read_only data column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.196911017Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.734919ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.209358712Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.209518563Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=159.451µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.212991492Z level=info msg="Executing migration" id="Update json_data with nulls" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.213190903Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=198.891µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.226426551Z level=info msg="Executing migration" id="Add uid column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.229128035Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.704694ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.233085236Z level=info msg="Executing migration" id="Update uid value" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.233271667Z level=info msg="Migration successfully 
executed" id="Update uid value" duration=186.721µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.245224949Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.246470265Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.245206ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.256667428Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.257837175Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.171047ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.266149438Z level=info msg="Executing migration" id="create api_key table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.266954472Z level=info msg="Migration successfully executed" id="create api_key table" duration=805.134µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.311226763Z level=info msg="Executing migration" id="add index api_key.account_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.311873506Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=645.203µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.317753076Z level=info msg="Executing migration" id="add index api_key.key" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.31848287Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=729.894µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.330268641Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.330850925Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=582.084µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.34534224Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.346609026Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.268026ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.400451506Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.402028505Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.578829ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.410951511Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.411593584Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=641.963µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.421480366Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.428245721Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.761805ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.434221273Z level=info msg="Executing migration" id="create api_key table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.434849386Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=627.703µs 
23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.449586252Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.450547547Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=961.145µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.490519286Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.491516001Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=998.525µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.498788489Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.499584273Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=795.774µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.516486931Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.517185514Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=700.363µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.539832102Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.541079539Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.246217ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.570597082Z level=info msg="Executing migration" id="Update api_key table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.570637652Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=40.52µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.592532556Z level=info msg="Executing migration" id="Add expires to api_key table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.595574652Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.043326ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.632669285Z level=info msg="Executing migration" id="Add service account foreign key" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.637629031Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.959336ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.649920285Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.650175837Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=255.822µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.792179025Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.797098881Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.921956ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.833781372Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.838089655Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=4.309293ms 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:43.947152633Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:43.948528489Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.378636ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.22988093Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.230415533Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=536.563µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.379736537Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.380364349Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=626.682µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.595686057Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.596520951Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=836.944µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.947506344Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:44.948983831Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.477737ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.196307224Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.197789791Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.485327ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.591697205Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.591826775Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=132.72µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.713745914Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.713796775Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=55.151µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.733561939Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:17:33 kafka | [2024-04-19 23:14:49,685] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:14:49,696] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,701] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) 23:17:33 kafka | [2024-04-19 23:14:49,701] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,704] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:17:33 kafka | [2024-04-19 23:14:49,704] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:17:33 kafka | [2024-04-19 23:14:49,706] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:17:33 kafka | [2024-04-19 23:14:49,735] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 23:17:33 kafka | [2024-04-19 23:14:49,735] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,739] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:33 kafka | [2024-04-19 23:14:49,742] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,745] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,749] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,761] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:17:33 kafka | [2024-04-19 23:14:49,773] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,778] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,784] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:17:33 kafka | [2024-04-19 23:14:49,784] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:17:33 kafka | [2024-04-19 23:14:49,788] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:17:33 kafka | [2024-04-19 23:14:49,791] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 23:17:33 kafka | [2024-04-19 23:14:49,795] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:17:33 kafka | [2024-04-19 23:14:49,799] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,799] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,801] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,801] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,802] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:17:33 kafka | [2024-04-19 23:14:49,802] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) 23:17:33 kafka | [2024-04-19 23:14:49,802] INFO Kafka startTimeMs: 1713568489797 (org.apache.kafka.common.utils.AppInfoParser) 23:17:33 kafka | [2024-04-19 23:14:49,803] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:17:33 kafka | [2024-04-19 23:14:49,805] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,806] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,806] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,807] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:17:33 kafka | [2024-04-19 23:14:49,808] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,812] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:14:49,818] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:17:33 kafka | [2024-04-19 23:14:49,818] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:17:33 kafka | [2024-04-19 23:14:49,821] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:17:33 kafka | [2024-04-19 23:14:49,821] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:17:33 kafka | [2024-04-19 23:14:49,821] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:17:33 kafka | [2024-04-19 23:14:49,823] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:17:33 kafka | [2024-04-19 23:14:49,825] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:17:33 kafka | [2024-04-19 23:14:49,826] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 
23:14:49,832] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:17:33 kafka | [2024-04-19 23:14:49,839] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,844] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,845] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,848] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:17:33 policy-api | Waiting for mariadb port 3306... 23:17:33 policy-api | mariadb (172.17.0.4:3306) open 23:17:33 policy-api | Waiting for policy-db-migrator port 6824... 23:17:33 policy-api | policy-db-migrator (172.17.0.6:6824) open 23:17:33 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:17:33 policy-api | 23:17:33 policy-api | . ____ _ __ _ _ 23:17:33 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:17:33 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:17:33 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:17:33 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:17:33 policy-api | =========|_|==============|___/=/_/_/_/ 23:17:33 policy-api | :: Spring Boot :: (v3.1.10) 23:17:33 policy-api | 23:17:33 policy-api | [2024-04-19T23:15:11.923+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 23:17:33 policy-api | [2024-04-19T23:15:11.980+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 44 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:17:33 policy-api | [2024-04-19T23:15:11.981+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:17:33 policy-api | [2024-04-19T23:15:13.980+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:17:33 policy-api | [2024-04-19T23:15:14.064+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 73 ms. Found 6 JPA repository interfaces. 23:17:33 policy-api | [2024-04-19T23:15:14.502+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:17:33 policy-api | [2024-04-19T23:15:14.503+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:17:33 policy-api | [2024-04-19T23:15:15.151+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:17:33 policy-api | [2024-04-19T23:15:15.161+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:17:33 policy-api | [2024-04-19T23:15:15.164+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:17:33 policy-api | [2024-04-19T23:15:15.164+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:17:33 policy-api | [2024-04-19T23:15:15.259+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:17:33 policy-api | [2024-04-19T23:15:15.260+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3209 ms 23:17:33 policy-api | [2024-04-19T23:15:15.680+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:17:33 policy-api | [2024-04-19T23:15:15.766+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 23:17:33 policy-api | [2024-04-19T23:15:15.816+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:17:33 policy-api | [2024-04-19T23:15:16.126+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:17:33 policy-api | [2024-04-19T23:15:16.172+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:17:33 policy-api | [2024-04-19T23:15:16.281+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@1f0b3cfe 23:17:33 policy-api | [2024-04-19T23:15:16.284+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
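The HikariPool-1 lines above show policy-api bringing up its connection pool against MariaDB via org.mariadb.jdbc. A minimal sketch of the equivalent programmatic HikariCP setup, assuming a reachable MariaDB host and placeholder database name, user, and password (the real values come from apiParameters.yaml and are not visible in this log):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

// Sketch only: URL, user and password below are placeholders, not the values policy-api uses.
public final class PolicyApiDataSourceSketch {
    public static void main(String[] args) throws Exception {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // hypothetical database name
        cfg.setUsername("policy_user");      // placeholder
        cfg.setPassword("policy_password");  // placeholder
        cfg.setDriverClassName("org.mariadb.jdbc.Driver");

        try (HikariDataSource ds = new HikariDataSource(cfg);   // logs "HikariPool-1 - Starting..."
             Connection conn = ds.getConnection()) {            // pool adds a MariaDB connection
            System.out.println("Connection valid: " + conn.isValid(2));
        }
    }
}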
23:17:33 kafka | [2024-04-19 23:14:49,850] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,872] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:49,901] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:14:49,910] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:17:33 kafka | [2024-04-19 23:14:49,982] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:17:33 kafka | [2024-04-19 23:14:54,873] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:14:54,874] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:15:35,951] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:17:33 kafka | [2024-04-19 23:15:36,025] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:17:33 kafka | [2024-04-19 23:15:36,031] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:15:36,120] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:15:36,275] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(zGHIClL6Qp6xpcj0YvaaWw),Map(policy-pdp-pap-0 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:15:36,275] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:15:36,277] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,278] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,282] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,282] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,353] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,355] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,356] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,358] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,360] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,360] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,364] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,365] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-api | [2024-04-19T23:15:18.442+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:17:33 policy-api | [2024-04-19T23:15:18.447+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:17:33 policy-api | [2024-04-19T23:15:19.645+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:17:33 
policy-api | [2024-04-19T23:15:20.849+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:17:33 policy-api | [2024-04-19T23:15:22.136+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:17:33 policy-api | [2024-04-19T23:15:22.414+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6f54a7be, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4c48ccc4, org.springframework.security.web.context.SecurityContextHolderFilter@2fcd0756, org.springframework.security.web.header.HeaderWriterFilter@20a4f67a, org.springframework.security.web.authentication.logout.LogoutFilter@567dc7d7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@46270641, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5b6fd32d, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5e47e1f, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4c32428a, org.springframework.security.web.access.ExceptionTranslationFilter@66741691, org.springframework.security.web.access.intercept.AuthorizationFilter@1d93bd2a] 23:17:33 policy-api | [2024-04-19T23:15:23.244+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:17:33 policy-api | [2024-04-19T23:15:23.350+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:17:33 policy-api | [2024-04-19T23:15:23.382+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:17:33 policy-api | [2024-04-19T23:15:23.402+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.175 seconds (process running for 12.818) 23:17:33 policy-api | [2024-04-19T23:15:39.918+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:17:33 policy-api | [2024-04-19T23:15:39.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:17:33 policy-api | [2024-04-19T23:15:39.919+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 23:17:33 policy-api | [2024-04-19T23:15:40.608+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 23:17:33 policy-api | [] 23:17:33 policy-apex-pdp | client.id = producer-1 23:17:33 policy-apex-pdp | compression.type = none 23:17:33 policy-apex-pdp | connections.max.idle.ms = 540000 23:17:33 policy-apex-pdp | delivery.timeout.ms = 120000 23:17:33 policy-apex-pdp | enable.idempotence = true 23:17:33 policy-apex-pdp | interceptor.classes = [] 23:17:33 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:33 policy-apex-pdp | linger.ms = 0 23:17:33 policy-apex-pdp | max.block.ms = 60000 23:17:33 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:17:33 policy-apex-pdp | max.request.size = 1048576 23:17:33 policy-apex-pdp | metadata.max.age.ms = 300000 23:17:33 policy-apex-pdp | metadata.max.idle.ms = 300000 23:17:33 policy-apex-pdp | metric.reporters = [] 23:17:33 policy-apex-pdp | metrics.num.samples = 2 23:17:33 policy-apex-pdp | metrics.recording.level = INFO 
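The policy-apex-pdp block above dumps the producer configuration it starts with: client.id producer-1, idempotence enabled, StringSerializer for keys and values. A minimal Java sketch that builds a producer with those same settings and publishes to the policy-pdp-pap topic seen elsewhere in this log; this is illustrative only, not the actual policy-apex-pdp code, which derives its configuration from its parameter file:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch mirroring a few of the producer settings logged above.
public final class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // hypothetical payload; the real PDP_STATUS message is built by the PDP itself
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}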
23:17:33 policy-apex-pdp | metrics.sample.window.ms = 30000 23:17:33 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:17:33 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:17:33 policy-apex-pdp | partitioner.class = null 23:17:33 policy-apex-pdp | partitioner.ignore.keys = false 23:17:33 policy-apex-pdp | receive.buffer.bytes = 32768 23:17:33 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:17:33 policy-apex-pdp | reconnect.backoff.ms = 50 23:17:33 policy-apex-pdp | request.timeout.ms = 30000 23:17:33 policy-apex-pdp | retries = 2147483647 23:17:33 policy-apex-pdp | retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.client.callback.handler.class = null 23:17:33 policy-apex-pdp | sasl.jaas.config = null 23:17:33 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 policy-apex-pdp | sasl.kerberos.service.name = null 23:17:33 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 policy-apex-pdp | sasl.login.callback.handler.class = null 23:17:33 policy-apex-pdp | sasl.login.class = null 23:17:33 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:17:33 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:17:33 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:17:33 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:17:33 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:17:33 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:17:33 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:17:33 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.mechanism = GSSAPI 23:17:33 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, 
PROPERTIES_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0470-pdp.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-apex-pdp | security.protocol = PLAINTEXT 23:17:33 policy-apex-pdp | security.providers = null 23:17:33 policy-apex-pdp | send.buffer.bytes = 131072 
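The policy-db-migrator entries above apply numbered upgrade scripts, each wrapping a CREATE TABLE IF NOT EXISTS statement so the step can be re-run safely. A minimal sketch of executing one such step over JDBC, reusing the 0450-pdpgroup.sql statement quoted in the log and the same placeholder MariaDB connection details as in the earlier sketch (not the credentials used by the CSIT environment):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: applies a single idempotent DDL step the way the migrator's scripts do.
public final class SingleMigrationStepSketch {
    private static final String DDL =
        "CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, "
      + "PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, "
      + "version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))";

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_password");
             Statement stmt = conn.createStatement()) {
            stmt.execute(DDL); // IF NOT EXISTS makes a second run a no-op
        }
    }
}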
23:17:33 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:17:33 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-apex-pdp | ssl.cipher.suites = null 23:17:33 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:17:33 policy-apex-pdp | ssl.engine.factory.class = null 23:17:33 policy-apex-pdp | ssl.key.password = null 23:17:33 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:17:33 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:17:33 policy-apex-pdp | ssl.keystore.key = null 23:17:33 policy-apex-pdp | ssl.keystore.location = null 23:17:33 policy-apex-pdp | ssl.keystore.password = null 23:17:33 policy-apex-pdp | ssl.keystore.type = JKS 23:17:33 policy-apex-pdp | ssl.protocol = TLSv1.3 23:17:33 policy-apex-pdp | ssl.provider = null 23:17:33 policy-apex-pdp | ssl.secure.random.implementation = null 23:17:33 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-apex-pdp | ssl.truststore.certificates = null 23:17:33 policy-apex-pdp | ssl.truststore.location = null 23:17:33 policy-apex-pdp | ssl.truststore.password = null 23:17:33 policy-apex-pdp | ssl.truststore.type = JKS 23:17:33 policy-apex-pdp | transaction.timeout.ms = 60000 23:17:33 policy-apex-pdp | transactional.id = null 23:17:33 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:33 policy-apex-pdp | 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.888+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.904+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.904+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.904+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568536904 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.904+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=585417c1-7e2b-4045-84fc-b0c77bef1ab2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.904+00:00|INFO|ServiceManager|main] service manager starting set alive 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.904+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.906+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.906+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.907+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.907+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.907+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.907+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper 
[fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.907+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.908+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.918+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:17:33 policy-apex-pdp | [] 23:17:33 policy-apex-pdp | [2024-04-19T23:15:36.920+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"aeb11eb0-b039-4a0f-b5c8-8beeceb3c516","timestampMs":1713568536908,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.131+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.132+00:00|INFO|ServiceManager|main] service manager starting 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.132+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.132+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.155+00:00|INFO|ServiceManager|main] service manager started 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.155+00:00|INFO|ServiceManager|main] service manager started 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.160+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:36,382] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(d-1cpAONRW2xQxhn5w0MHg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:15:36,382] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:17:33 kafka | [2024-04-19 23:15:36,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,383] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,384] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,385] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,385] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,385] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,385] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,385] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,385] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,386] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,386] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,386] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,386] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,386] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.159+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], 
servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.427+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Cluster ID: pOvPZ_ZqQ6Wyt7DXYtLMbg 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.427+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: pOvPZ_ZqQ6Wyt7DXYtLMbg 23:17:33 policy-apex-pdp | [2024-04-19T23:15:37.429+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:17:33 policy-apex-pdp | [2024-04-19T23:15:38.006+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:17:33 policy-apex-pdp | [2024-04-19T23:15:38.007+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:17:33 policy-apex-pdp | [2024-04-19T23:15:38.890+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:33 policy-apex-pdp | [2024-04-19T23:15:38.897+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] (Re-)joining group 23:17:33 policy-apex-pdp | [2024-04-19T23:15:38.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Request joining group due to: need to re-join with the given member-id: consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2-7a050cc1-a6f7-4a93-9366-5da526afdbfc 23:17:33 policy-apex-pdp | [2024-04-19T23:15:38.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:17:33 policy-apex-pdp | [2024-04-19T23:15:38.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] (Re-)joining group 23:17:33 policy-apex-pdp | [2024-04-19T23:15:41.944+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2-7a050cc1-a6f7-4a93-9366-5da526afdbfc', protocol='range'} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:41.949+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Finished assignment for group at generation 1: {consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2-7a050cc1-a6f7-4a93-9366-5da526afdbfc=Assignment(partitions=[policy-pdp-pap-0])} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:42.001+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2-7a050cc1-a6f7-4a93-9366-5da526afdbfc', protocol='range'} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:42.002+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:33 policy-apex-pdp | [2024-04-19T23:15:42.003+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Adding newly assigned partitions: policy-pdp-pap-0 23:17:33 policy-apex-pdp | [2024-04-19T23:15:42.020+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Found no committed offset for partition policy-pdp-pap-0 23:17:33 policy-apex-pdp | [2024-04-19T23:15:42.037+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2, groupId=f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:17:33 policy-apex-pdp | [2024-04-19T23:15:56.179+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.3 - policyadmin [19/Apr/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10643 "-" "Prometheus/2.51.2" 23:17:33 policy-apex-pdp | [2024-04-19T23:15:56.907+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4da61fd1-2308-426b-bac6-0f75d06b914a","timestampMs":1713568556907,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:56.938+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4da61fd1-2308-426b-bac6-0f75d06b914a","timestampMs":1713568556907,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:56.941+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.157+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f2001688-7f3f-437b-bcaf-5fed37cf046d","timestampMs":1713568557051,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.172+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.172+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f8f61dde-e709-42de-91d5-428aefe19d78","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.173+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f2001688-7f3f-437b-bcaf-5fed37cf046d","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a319096f-d675-414b-8f81-04373f266b58","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.187+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f8f61dde-e709-42de-91d5-428aefe19d78","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.187+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.187+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f2001688-7f3f-437b-bcaf-5fed37cf046d","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a319096f-d675-414b-8f81-04373f266b58","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.187+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.215+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"096fb3cf-23ce-4b31-a646-611cda5e34f2","timestampMs":1713568557053,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.217+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"096fb3cf-23ce-4b31-a646-611cda5e34f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"e9850bdf-6851-4267-9bfb-33502b76eae9","timestampMs":1713568557217,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.225+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"096fb3cf-23ce-4b31-a646-611cda5e34f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"e9850bdf-6851-4267-9bfb-33502b76eae9","timestampMs":1713568557217,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.226+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.405+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"de469107-a803-46fd-b33d-5683cffa9592","timestampMs":1713568557321,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.407+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"de469107-a803-46fd-b33d-5683cffa9592","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"61ec1fb9-3764-402f-ac22-7d631f726aae","timestampMs":1713568557406,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.416+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"de469107-a803-46fd-b33d-5683cffa9592","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"61ec1fb9-3764-402f-ac22-7d631f726aae","timestampMs":1713568557406,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-apex-pdp | [2024-04-19T23:15:57.417+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:33 policy-apex-pdp | [2024-04-19T23:16:56.080+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.3 - policyadmin [19/Apr/2024:23:16:56 +0000] "GET /metrics HTTP/1.1" 200 10642 "-" "Prometheus/2.51.2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.737591699Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.03112ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.743234646Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.745836268Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.601362ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.750180418Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.750251708Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=72.64µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.753979216Z level=info msg="Executing migration" id="create quota table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.75475358Z level=info 
msg="Migration successfully executed" id="create quota table v1" duration=776.304µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.765697482Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.767201659Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.506947ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.775036176Z level=info msg="Executing migration" id="Update quota table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.775065736Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=33.06µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.81553589Z level=info msg="Executing migration" id="create plugin_setting table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.816449214Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=914.205µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.821719089Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.822762383Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.044424ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.827059774Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.83042207Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.361856ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.848995439Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.849035959Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=44.18µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.88523374Z level=info msg="Executing migration" id="create session table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.886016864Z level=info msg="Migration successfully executed" id="create session table" duration=786.004µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.889820942Z level=info msg="Executing migration" id="Drop old table playlist table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.889903183Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=83.131µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.893498219Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.89357715Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=80.161µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.918157187Z level=info msg="Executing migration" id="create playlist table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.919312183Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.155106ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.962489698Z level=info msg="Executing migration" id="create playlist item table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.963854485Z level=info msg="Migration successfully executed" 
id="create playlist item table v2" duration=1.364797ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.972775647Z level=info msg="Executing migration" id="Update playlist table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.972828037Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=53.24µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.979090677Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.979131217Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=42.43µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.984345252Z level=info msg="Executing migration" id="Add playlist column created_at" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.990846043Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=6.498541ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:45.995944747Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.001272553Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=5.326816ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.007456382Z level=info msg="Executing migration" id="drop preferences table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.007576072Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=119.83µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.017991342Z level=info msg="Executing migration" id="drop preferences table v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.018159603Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=169.601µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.024979595Z level=info msg="Executing migration" id="create preferences table v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.02585817Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=879.255µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.031108235Z level=info msg="Executing migration" id="Update preferences table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.031135055Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=28.25µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.036317528Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.039701955Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.383497ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.048220566Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.048439987Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=192.621µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.056287215Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.059428359Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.140524ms 23:17:33 grafana | 
logger=migrator t=2024-04-19T23:14:46.064787135Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.06813503Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.347235ms 23:17:33 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:17:33 policy-db-migrator | 
-------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.07639714Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.0765195Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=121.8µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.081423054Z level=info msg="Executing migration" id="Add preferences index org_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.082451578Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.027544ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.086656058Z level=info msg="Executing migration" id="Add preferences index user_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.087689943Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.034125ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.093491551Z level=info msg="Executing migration" id="create alert table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.094713116Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.221215ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.206995099Z level=info msg="Executing migration" id="add index alert org_id & id " 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.208586147Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.588038ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.34391698Z level=info msg="Executing migration" id="add index alert state" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.345243777Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.329747ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.439352823Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.440410799Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.061786ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.618621025Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.620497654Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.874489ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.629111635Z level=info msg="Executing migration" 
id="Add unique index alert_rule_tag.alert_id_tag_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.630540763Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.425888ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.636668801Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.637413494Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=744.333µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.641746126Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.651508842Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.760606ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.65533078Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.655985823Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=655.844µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.686448137Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.688361277Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.91599ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.997893888Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:46.998682771Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=789.123µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.096585153Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.098415223Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.82841ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.164157247Z level=info msg="Executing migration" id="create alert_notification table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.165115721Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=961.964µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.305778916Z level=info msg="Executing migration" id="Add column is_default" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.308495159Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.717823ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.404864654Z level=info msg="Executing migration" id="Add column frequency" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.407489487Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.626633ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.511826092Z level=info msg="Executing migration" id="Add column send_reminder" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.514360484Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.532812ms 
23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.689113606Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.694388942Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.278126ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.717847818Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.718594872Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=749.444µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.727282775Z level=info msg="Executing migration" id="Update alert table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.727323665Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=44.14µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.731223154Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.731243864Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=21.38µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.736007108Z level=info msg="Executing migration" id="create notification_journal table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.736813521Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=806.783µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.741777506Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.742896032Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.118316ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.750113877Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.751138063Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.028426ms 23:17:33 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:17:33 simulator | overriding logback.xml 23:17:33 simulator | 2024-04-19 23:14:38,235 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:17:33 simulator | 2024-04-19 23:14:38,294 INFO org.onap.policy.models.simulators starting 23:17:33 simulator | 2024-04-19 23:14:38,295 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:17:33 simulator | 2024-04-19 23:14:38,506 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:17:33 simulator | 2024-04-19 23:14:38,507 INFO org.onap.policy.models.simulators starting A&AI simulator 23:17:33 simulator | 2024-04-19 23:14:38,607 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:33 simulator | 2024-04-19 23:14:38,618 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:38,620 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:38,625 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:33 simulator | 2024-04-19 23:14:38,716 INFO Session workerName=node0 23:17:33 simulator | 2024-04-19 23:14:39,236 INFO Using GSON for REST calls 23:17:33 simulator | 2024-04-19 23:14:39,316 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 23:17:33 simulator | 2024-04-19 23:14:39,327 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:17:33 simulator | 2024-04-19 23:14:39,335 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1581ms 23:17:33 simulator | 2024-04-19 23:14:39,335 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4285 ms. 23:17:33 simulator | 2024-04-19 23:14:39,340 INFO org.onap.policy.models.simulators starting SDNC simulator 23:17:33 simulator | 2024-04-19 23:14:39,353 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:33 simulator | 2024-04-19 23:14:39,353 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:39,354 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:39,356 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:33 simulator | 2024-04-19 23:14:39,362 INFO Session workerName=node0 23:17:33 simulator | 2024-04-19 23:14:39,426 INFO Using GSON for REST calls 23:17:33 simulator | 2024-04-19 23:14:39,435 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 23:17:33 simulator | 2024-04-19 23:14:39,438 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:17:33 simulator | 2024-04-19 23:14:39,439 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1685ms 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:47.761332143Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.762572969Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.239406ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.769365632Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.770661359Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.295847ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.778174826Z level=info msg="Executing migration" id="Add for to alert table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.782438797Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.262141ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.786357726Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.791620222Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=5.263166ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.799598392Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.799866823Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=267.821µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.808799447Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.810074533Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.274846ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.814686656Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.815972303Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.287577ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.825871851Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.829920461Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.04833ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.834481133Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.834551784Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=73.361µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.840304432Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.841200807Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=896.715µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.849144776Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:47.850307552Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.162606ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.854365721Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.854555282Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=189.071µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.857880189Z level=info msg="Executing migration" id="create annotation table v5" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.859048635Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.167766ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.866364791Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.867371546Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.006495ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.870695602Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.872034189Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.338367ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.876535791Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.8783627Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.827209ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.88836439Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.889490225Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.125485ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.958576036Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:47.960456645Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.879589ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.143248737Z level=info msg="Executing migration" id="Update annotation table charset" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.143291897Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=45.141µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.278923904Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.283515587Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.594953ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.467979115Z level=info msg="Executing migration" id="Drop category_id index" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.469898985Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.88219ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.554530562Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.56215921Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=7.634488ms 
23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.665825391Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.667284558Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.453748ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.686297101Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.68803597Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.742259ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.773945564Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.775519871Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.576257ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.892134616Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:17:33 simulator | 2024-04-19 23:14:39,439 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4915 ms. 
23:17:33 simulator | 2024-04-19 23:14:39,459 INFO org.onap.policy.models.simulators starting SO simulator 23:17:33 simulator | 2024-04-19 23:14:39,461 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:33 simulator | 2024-04-19 23:14:39,462 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:39,462 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:39,463 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:33 simulator | 2024-04-19 23:14:39,465 INFO Session workerName=node0 23:17:33 simulator | 2024-04-19 23:14:39,513 INFO Using GSON for REST calls 23:17:33 simulator | 2024-04-19 23:14:39,525 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 23:17:33 simulator | 2024-04-19 23:14:39,526 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:17:33 simulator | 2024-04-19 23:14:39,526 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1772ms 23:17:33 simulator | 2024-04-19 23:14:39,526 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4936 ms. 23:17:33 simulator | 2024-04-19 23:14:39,527 INFO org.onap.policy.models.simulators starting VFC simulator 23:17:33 simulator | 2024-04-19 23:14:39,529 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:33 simulator | 2024-04-19 23:14:39,529 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:39,530 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:33 simulator | 2024-04-19 23:14:39,531 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:33 simulator | 2024-04-19 23:14:39,539 INFO Session workerName=node0 23:17:33 simulator | 2024-04-19 23:14:39,583 INFO Using GSON for REST calls 23:17:33 simulator | 
2024-04-19 23:14:39,592 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 23:17:33 simulator | 2024-04-19 23:14:39,593 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:17:33 simulator | 2024-04-19 23:14:39,593 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1840ms 23:17:33 simulator | 2024-04-19 23:14:39,593 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4937 ms. 23:17:33 simulator | 2024-04-19 23:14:39,594 INFO org.onap.policy.models.simulators started 23:17:33 policy-pap | Waiting for mariadb port 3306... 23:17:33 policy-pap | mariadb (172.17.0.4:3306) open 23:17:33 policy-pap | Waiting for kafka port 9092... 23:17:33 policy-pap | kafka (172.17.0.9:9092) open 23:17:33 policy-pap | Waiting for api port 6969... 23:17:33 policy-pap | api (172.17.0.7:6969) open 23:17:33 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:17:33 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:17:33 policy-pap | 23:17:33 policy-pap | . ____ _ __ _ _ 23:17:33 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:17:33 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:17:33 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:17:33 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:17:33 policy-pap | =========|_|==============|___/=/_/_/_/ 23:17:33 policy-pap | :: Spring Boot :: (v3.1.10) 23:17:33 policy-pap | 23:17:33 policy-pap | [2024-04-19T23:15:26.253+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 23:17:33 policy-pap | [2024-04-19T23:15:26.328+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 52 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:17:33 policy-pap | [2024-04-19T23:15:26.329+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:17:33 policy-pap | [2024-04-19T23:15:28.287+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:17:33 policy-pap | [2024-04-19T23:15:28.374+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 7 JPA repository interfaces. 23:17:33 policy-pap | [2024-04-19T23:15:28.803+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:17:33 policy-pap | [2024-04-19T23:15:28.803+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:17:33 policy-pap | [2024-04-19T23:15:29.441+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:17:33 policy-pap | [2024-04-19T23:15:29.451+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:17:33 policy-pap | [2024-04-19T23:15:29.454+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:17:33 policy-pap | [2024-04-19T23:15:29.454+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 23:17:33 policy-pap | [2024-04-19T23:15:29.553+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:17:33 policy-pap | [2024-04-19T23:15:29.553+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3150 ms 23:17:33 policy-pap | [2024-04-19T23:15:29.965+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:17:33 policy-pap | [2024-04-19T23:15:30.018+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 23:17:33 policy-pap | [2024-04-19T23:15:30.366+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:17:33 zookeeper | ===> User 23:17:33 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:17:33 zookeeper | ===> Configuring ... 23:17:33 zookeeper | ===> Running preflight checks ... 23:17:33 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 23:17:33 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 23:17:33 zookeeper | ===> Launching ... 23:17:33 zookeeper | ===> Launching zookeeper ... 23:17:33 zookeeper | [2024-04-19 23:14:43,250] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,256] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,256] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,256] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,256] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,258] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:33 zookeeper | [2024-04-19 23:14:43,258] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:33 zookeeper | [2024-04-19 23:14:43,258] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:33 zookeeper | [2024-04-19 23:14:43,258] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:17:33 zookeeper | [2024-04-19 23:14:43,259] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:17:33 zookeeper | [2024-04-19 23:14:43,260] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,260] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,260] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,260] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,260] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:33 zookeeper | [2024-04-19 23:14:43,260] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:17:33 zookeeper | [2024-04-19 23:14:43,271] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics) 23:17:33 zookeeper | [2024-04-19 23:14:43,274] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:17:33 zookeeper | [2024-04-19 23:14:43,274] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:17:33 zookeeper | [2024-04-19 23:14:43,276] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,284] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,285] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,285] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:host.name=6ccbf7d53aa0 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../s
hare/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 
zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,286] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,287] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 kafka | [2024-04-19 23:15:36,392] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,393] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,393] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 23:17:33 kafka | [2024-04-19 23:15:36,394] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,394] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,394] INFO [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,394] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,395] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,395] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,395] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,402] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,402] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,402] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,403] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,403] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,403] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,403] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,403] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,403] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,403] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,404] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,404] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 
23:15:36,404] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,404] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,404] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,404] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,405] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,405] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,405] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,405] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,405] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,407] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,408] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to 
NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 zookeeper | [2024-04-19 23:14:43,288] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,288] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:17:33 zookeeper | [2024-04-19 23:14:43,289] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,289] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,290] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:17:33 zookeeper | [2024-04-19 23:14:43,290] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:17:33 zookeeper | [2024-04-19 23:14:43,291] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:33 zookeeper | [2024-04-19 23:14:43,291] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:33 zookeeper | [2024-04-19 23:14:43,291] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:33 zookeeper | [2024-04-19 23:14:43,291] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:33 policy-pap | [2024-04-19T23:15:30.466+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@1e0895f5 23:17:33 policy-pap | [2024-04-19T23:15:30.471+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:17:33 policy-pap | [2024-04-19T23:15:30.508+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 23:17:33 policy-pap | [2024-04-19T23:15:31.939+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 23:17:33 policy-pap | [2024-04-19T23:15:31.950+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:17:33 policy-pap | [2024-04-19T23:15:32.399+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:17:33 policy-pap | [2024-04-19T23:15:32.865+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:17:33 policy-pap | [2024-04-19T23:15:32.996+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:17:33 policy-pap | [2024-04-19T23:15:33.352+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:33 policy-pap | allow.auto.create.topics = true 23:17:33 policy-pap | auto.commit.interval.ms = 5000 23:17:33 policy-pap | auto.include.jmx.reporter = true 23:17:33 policy-pap | auto.offset.reset = latest 23:17:33 policy-pap | bootstrap.servers = [kafka:9092] 23:17:33 policy-pap | check.crcs = true 23:17:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:33 policy-pap | client.id = consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-1 23:17:33 policy-pap | client.rack = 23:17:33 policy-pap | connections.max.idle.ms = 540000 23:17:33 policy-pap | default.api.timeout.ms = 60000 23:17:33 policy-pap | enable.auto.commit = true 23:17:33 policy-pap | exclude.internal.topics = true 23:17:33 policy-pap | fetch.max.bytes = 52428800 23:17:33 policy-pap | fetch.max.wait.ms = 500 23:17:33 policy-pap | fetch.min.bytes = 1 23:17:33 policy-pap | group.id = 8bb904e8-d607-4b4b-97e9-485d0625cc37 23:17:33 policy-pap | group.instance.id = null 23:17:33 policy-pap | heartbeat.interval.ms = 3000 23:17:33 policy-pap | interceptor.classes = [] 23:17:33 policy-pap | internal.leave.group.on.close = true 23:17:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:33 policy-pap | isolation.level = read_uncommitted 23:17:33 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-pap | max.partition.fetch.bytes = 1048576 23:17:33 policy-pap | max.poll.interval.ms = 300000 23:17:33 policy-pap | max.poll.records = 500 23:17:33 policy-pap | metadata.max.age.ms = 300000 23:17:33 policy-pap | metric.reporters = [] 23:17:33 policy-pap | metrics.num.samples = 2 23:17:33 policy-pap | metrics.recording.level = INFO 23:17:33 policy-pap | metrics.sample.window.ms = 30000 23:17:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:33 policy-pap | receive.buffer.bytes = 65536 23:17:33 policy-pap | reconnect.backoff.max.ms = 1000 23:17:33 policy-pap | reconnect.backoff.ms = 50 23:17:33 policy-pap | request.timeout.ms = 30000 23:17:33 policy-pap | retry.backoff.ms = 100 23:17:33 policy-pap | sasl.client.callback.handler.class = null 23:17:33 policy-pap | sasl.jaas.config = null 23:17:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 policy-pap | sasl.kerberos.service.name = null 23:17:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 policy-pap | sasl.login.callback.handler.class = null 23:17:33 policy-pap | sasl.login.class = null 23:17:33 policy-pap | sasl.login.connect.timeout.ms = null 23:17:33 policy-pap | sasl.login.read.timeout.ms = null 23:17:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:33 policy-pap | sasl.mechanism = GSSAPI 23:17:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 
policy-pap | sasl.oauthbearer.expected.audience = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:48.90126503Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.132774ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.065799941Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.067170757Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.374126ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.074622914Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.076054982Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.435418ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.084811725Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.085279897Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=469.442µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.089861529Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.090779044Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=919.435µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.19162603Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.192135542Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=512.822µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.227822618Z level=info msg="Executing migration" id="Add created time to annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.235069873Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=7.245755ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.336492692Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.340907264Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.415582ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.455685409Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.457215897Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.533328ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.538880419Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.539920083Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.042184ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.550180674Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:49.550443426Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=263.932µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.55531476Z level=info msg="Executing migration" id="Add epoch_end column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.559328879Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.013579ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.564240574Z level=info msg="Executing migration" id="Add index for epoch_end" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.564953807Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=713.703µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.570251213Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.570429744Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=179.581µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.574590224Z level=info msg="Executing migration" id="Move region to single row" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.575053807Z level=info msg="Migration successfully executed" id="Move region to single row" duration=463.523µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.582654973Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.583559449Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=905.045µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.588315841Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.589112096Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=796.415µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.592566223Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.593396397Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=829.624µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.598919184Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.599904579Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=985.375µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.603909509Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.604721552Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=816.293µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.608903983Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.61041899Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" 
duration=1.523127ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.680936817Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.681064388Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=129.42µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.689171317Z level=info msg="Executing migration" id="create test_data table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.690321154Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.192207ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.696627485Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.697419578Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=792.043µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.702080101Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.702975025Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=895.534µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.708807404Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:17:33 zookeeper | [2024-04-19 23:14:43,291] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:33 zookeeper | [2024-04-19 23:14:43,291] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:33 zookeeper | [2024-04-19 23:14:43,293] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,293] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,294] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:17:33 zookeeper | [2024-04-19 23:14:43,294] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:17:33 zookeeper | [2024-04-19 23:14:43,294] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,312] INFO Logging initialized @478ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:17:33 zookeeper | [2024-04-19 23:14:43,418] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:17:33 zookeeper | [2024-04-19 23:14:43,418] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:17:33 zookeeper | [2024-04-19 23:14:43,435] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 23:17:33 zookeeper | [2024-04-19 23:14:43,463] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:17:33 zookeeper | [2024-04-19 23:14:43,463] INFO No SessionScavenger set, using defaults 
(org.eclipse.jetty.server.session) 23:17:33 zookeeper | [2024-04-19 23:14:43,464] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 23:17:33 zookeeper | [2024-04-19 23:14:43,467] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:17:33 zookeeper | [2024-04-19 23:14:43,474] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:17:33 zookeeper | [2024-04-19 23:14:43,487] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:17:33 zookeeper | [2024-04-19 23:14:43,488] INFO Started @654ms (org.eclipse.jetty.server.Server) 23:17:33 zookeeper | [2024-04-19 23:14:43,488] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,492] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:17:33 zookeeper | [2024-04-19 23:14:43,492] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:17:33 zookeeper | [2024-04-19 23:14:43,494] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:17:33 zookeeper | [2024-04-19 23:14:43,495] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:17:33 zookeeper | [2024-04-19 23:14:43,507] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:17:33 zookeeper | [2024-04-19 23:14:43,507] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:17:33 zookeeper | [2024-04-19 23:14:43,508] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:17:33 zookeeper | [2024-04-19 23:14:43,508] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:17:33 zookeeper | [2024-04-19 23:14:43,512] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:17:33 zookeeper | [2024-04-19 23:14:43,512] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:17:33 zookeeper | [2024-04-19 23:14:43,515] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:17:33 zookeeper | [2024-04-19 23:14:43,516] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:17:33 zookeeper | [2024-04-19 23:14:43,516] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:33 zookeeper | [2024-04-19 23:14:43,524] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:17:33 zookeeper | [2024-04-19 23:14:43,525] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:17:33 zookeeper | [2024-04-19 23:14:43,537] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 
(org.apache.zookeeper.server.ContainerManager) 23:17:33 zookeeper | [2024-04-19 23:14:43,538] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:17:33 zookeeper | [2024-04-19 23:14:46,467] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.709675509Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=868.035µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.714375792Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.714551592Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=176.24µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.719011295Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.719467677Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=455.352µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.726924734Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.727128625Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=204.951µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.73228897Z level=info msg="Executing migration" id="create team table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.732992123Z level=info msg="Migration successfully executed" id="create team table" duration=703.523µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.738652451Z level=info msg="Executing migration" id="add index team.org_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.740634031Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.98077ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.746116718Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.747113163Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=995.155µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.751584885Z level=info msg="Executing migration" id="Add column uid in team" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.755957016Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.372001ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.761337743Z level=info msg="Executing migration" id="Update uid column values in team" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.761510784Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=173.271µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.765300923Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.766535639Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.233676ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.771748414Z level=info msg="Executing migration" id="create team member table" 23:17:33 grafana | 
logger=migrator t=2024-04-19T23:14:49.7729374Z level=info msg="Migration successfully executed" id="create team member table" duration=1.188976ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.778037445Z level=info msg="Executing migration" id="add index team_member.org_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.778963359Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=925.174µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.783702103Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.784784079Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.081186ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.788663047Z level=info msg="Executing migration" id="add index team_member.team_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.789501371Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=838.284µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.793917354Z level=info msg="Executing migration" id="Add column email to team table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.798512666Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.594662ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.804315314Z level=info msg="Executing migration" id="Add column external to team_member table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.809339569Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.023715ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.816549205Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.824094002Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=7.542657ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.829062016Z level=info msg="Executing migration" id="create dashboard acl table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.82969394Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=629.864µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.83588513Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.837131916Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.247506ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.841041486Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.842114301Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.072405ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.852554492Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.853518747Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=967.165µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.860144329Z level=info msg="Executing migration" 
id="add index dashboard_acl_user_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.861061974Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=918.635µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.865783567Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.866730592Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=946.405µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.871727287Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.873135193Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.407746ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.878128548Z level=info msg="Executing migration" id="add index dashboard_permission" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.879084652Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=957.074µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.885086822Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.885780886Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=694.174µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.892275897Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:17:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:33 policy-pap | security.protocol = PLAINTEXT 23:17:33 policy-pap | security.providers = null 23:17:33 policy-pap | send.buffer.bytes = 131072 23:17:33 policy-pap | session.timeout.ms = 45000 23:17:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-pap | ssl.cipher.suites = null 23:17:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:33 policy-pap | ssl.engine.factory.class = null 23:17:33 policy-pap | ssl.key.password = null 23:17:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:33 policy-pap | ssl.keystore.certificate.chain = null 23:17:33 policy-pap | ssl.keystore.key = null 23:17:33 policy-pap | ssl.keystore.location = null 23:17:33 policy-pap | ssl.keystore.password = null 23:17:33 policy-pap | ssl.keystore.type = JKS 23:17:33 policy-pap | ssl.protocol = TLSv1.3 23:17:33 policy-pap | ssl.provider = null 23:17:33 policy-pap | ssl.secure.random.implementation = null 23:17:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-pap | ssl.truststore.certificates = null 23:17:33 policy-pap | ssl.truststore.location = null 23:17:33 policy-pap | ssl.truststore.password = null 23:17:33 policy-pap | ssl.truststore.type = JKS 
23:17:33 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-pap | 23:17:33 policy-pap | [2024-04-19T23:15:33.616+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 policy-pap | [2024-04-19T23:15:33.616+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 policy-pap | [2024-04-19T23:15:33.616+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568533614 23:17:33 policy-pap | [2024-04-19T23:15:33.619+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-1, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Subscribed to topic(s): policy-pdp-pap 23:17:33 policy-pap | [2024-04-19T23:15:33.620+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:33 policy-pap | allow.auto.create.topics = true 23:17:33 policy-pap | auto.commit.interval.ms = 5000 23:17:33 policy-pap | auto.include.jmx.reporter = true 23:17:33 policy-pap | auto.offset.reset = latest 23:17:33 policy-pap | bootstrap.servers = [kafka:9092] 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.892987481Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=711.014µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.901793664Z level=info msg="Executing migration" id="create tag table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.902525658Z level=info msg="Migration successfully executed" id="create tag table" duration=732.244µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.94977339Z level=info msg="Executing migration" id="add index tag.key_value" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.951261258Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.490608ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.963296406Z level=info msg="Executing migration" id="create login attempt table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:49.964312602Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.011946ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.072596944Z level=info msg="Executing migration" id="add index login_attempt.username" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.074082981Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.488567ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.08612393Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.087504358Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.379308ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.123979677Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.141214341Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.235614ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.179335188Z level=info msg="Executing migration" id="create login_attempt v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.179892482Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=553.234µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.377469652Z level=info msg="Executing 
migration" id="create index IDX_login_attempt_username - v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.378722749Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.260057ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.499630383Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.500153005Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=522.192µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.603814655Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.60498628Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.174066ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.709275843Z level=info msg="Executing migration" id="create user auth table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.711969325Z level=info msg="Migration successfully executed" id="create user auth table" duration=2.693162ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.907249675Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.908822313Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.586448ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.989078027Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:50.989254518Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=177.501µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.222456443Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.230226431Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.771908ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.46024453Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.469618246Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=9.377396ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.82298149Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.832171765Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=9.188225ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.98177353Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:51.990309641Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.537321ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.09591426Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.097936219Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=2.025419ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.108547971Z level=info msg="Executing 
migration" id="Add OAuth ID token to user_auth" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.11444508Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.898659ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.123643294Z level=info msg="Executing migration" id="create server_lock table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.125114762Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.470608ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.13079845Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.131803294Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.005124ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.136313247Z level=info msg="Executing migration" id="create user auth token table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.137375472Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.061255ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.149479582Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 
0600-toscanodetemplate.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.15121067Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.730858ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.164435215Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.166171914Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.736719ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.171267588Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.173038647Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.771129ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.199566487Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to 
NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,409] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from 
NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,410] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 
from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,411] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,412] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,412] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,481] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.209097093Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.527816ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.21446983Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.215287694Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=817.654µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.223436644Z level=info msg="Executing migration" id="create cache_data table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.225168243Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.730709ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.231051351Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.232682959Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.631608ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.237424622Z level=info msg="Executing migration" id="create short_url table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.238419997Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=994.545µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.243409521Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.245348091Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.93552ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.253500022Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.253738213Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=238.021µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.259647401Z level=info msg="Executing migration" id="delete alert_definition table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.259938433Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=290.262µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.264303275Z level=info msg="Executing 
migration" id="recreate alert_definition table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.266360494Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=2.055599ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.270756446Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.271860101Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.102395ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.277553559Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.279295287Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.740988ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.284652584Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.284910685Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=257.341µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.290891444Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.291888049Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=997.415µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.298269241Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.299270705Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.003384ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.30425711Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.305958519Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.701098ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.31232006Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.314059197Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.738847ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.319594225Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.329019362Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.417377ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.33478284Z level=info msg="Executing migration" id="drop alert_definition table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.335724964Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=942.304µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.343249851Z level=info msg="Executing 
migration" id="delete alert_definition_version table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.343636793Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=387.092µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.348135755Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.349852203Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.716158ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.354998699Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.35715702Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.164221ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.361997174Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.363143969Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.147116ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.418023868Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.418323519Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=297.431µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.456295205Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.457975374Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.677769ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.463985233Z level=info msg="Executing migration" id="create alert_instance table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.465312819Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.326826ms 23:17:33 kafka | [2024-04-19 23:15:36,493] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:17:33 kafka | [2024-04-19 23:15:36,496] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:17:33 kafka | [2024-04-19 23:15:36,500] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 kafka | [2024-04-19 23:15:36,502] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(zGHIClL6Qp6xpcj0YvaaWw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,671] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,683] INFO [Broker id=1] Finished LeaderAndIsr request in 319ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,694] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=zGHIClL6Qp6xpcj0YvaaWw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,700] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,700] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:36,701] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,140] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,140] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,140] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,140] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,140] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 23:17:33 policy-pap | check.crcs = true 23:17:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:33 policy-pap | client.id = consumer-policy-pap-2 23:17:33 policy-pap | client.rack = 23:17:33 policy-pap | connections.max.idle.ms = 540000 23:17:33 policy-pap | default.api.timeout.ms = 60000 23:17:33 policy-pap | enable.auto.commit = true 23:17:33 policy-pap | exclude.internal.topics = true 23:17:33 policy-pap | fetch.max.bytes = 52428800 23:17:33 policy-pap | fetch.max.wait.ms = 500 23:17:33 policy-pap | fetch.min.bytes = 1 23:17:33 policy-pap | group.id = policy-pap 23:17:33 policy-pap | group.instance.id = null 23:17:33 policy-pap | heartbeat.interval.ms = 3000 23:17:33 policy-pap | interceptor.classes = [] 23:17:33 policy-pap | internal.leave.group.on.close = true 23:17:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:33 policy-pap | isolation.level = read_uncommitted 23:17:33 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-pap | max.partition.fetch.bytes = 1048576 23:17:33 policy-pap | max.poll.interval.ms = 300000 23:17:33 policy-pap | max.poll.records = 500 23:17:33 policy-pap | metadata.max.age.ms = 300000 23:17:33 policy-pap | metric.reporters = [] 23:17:33 policy-pap | metrics.num.samples = 2 23:17:33 policy-pap | metrics.recording.level = INFO 23:17:33 policy-pap | metrics.sample.window.ms = 30000 23:17:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:33 policy-pap | receive.buffer.bytes = 65536 23:17:33 policy-pap | reconnect.backoff.max.ms = 1000 23:17:33 policy-pap | reconnect.backoff.ms = 50 23:17:33 policy-pap | request.timeout.ms = 30000 23:17:33 policy-pap | retry.backoff.ms = 100 23:17:33 policy-pap | sasl.client.callback.handler.class = null 23:17:33 policy-pap | sasl.jaas.config = null 23:17:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 policy-pap | sasl.kerberos.service.name = null 23:17:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 policy-pap | sasl.login.callback.handler.class = null 23:17:33 policy-pap | sasl.login.class = null 23:17:33 policy-pap | sasl.login.connect.timeout.ms = null 23:17:33 policy-pap | sasl.login.read.timeout.ms = null 23:17:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:33 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.469224469Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.470381154Z level=info msg="Migration successfully executed" id="add index 
in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.156485ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.481272057Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.482307682Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.035685ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.487075306Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.493041805Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.964029ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.506985443Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.508904683Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.91463ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.515098933Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.516187669Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.092226ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.520282199Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.546734958Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.451719ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.788168942Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:52.844158216Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=55.993864ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.399691737Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.401532046Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.842059ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.702363508Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.704233738Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.87351ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.756627623Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.764773073Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.15054ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.901515483Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:17:33 grafana | 
logger=migrator t=2024-04-19T23:14:53.912487217Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=10.973134ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.942910465Z level=info msg="Executing migration" id="create alert_rule table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.944479403Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.568957ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.95007564Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.951737958Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.662268ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.961447185Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.963092194Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.645589ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.967076613Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.968218699Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.141916ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.977288843Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.977387594Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=100.381µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.984107937Z level=info msg="Executing migration" id="add column for to alert_rule" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.993556913Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.452956ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:53.99715943Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.001331362Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.171552ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.005344941Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.01136488Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.019029ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.018430835Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.019345859Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=916.484µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.025942831Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.027246658Z level=info msg="Migration successfully executed" 
id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.302367ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.032563954Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.041713138Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.150334ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.047188935Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.052982353Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.791748ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.056172329Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.057131794Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=959.085µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.063397565Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.07276766Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.369265ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.076495429Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.082284257Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.788388ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.089498362Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.089602592Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=105.16µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.093941104Z level=info msg="Executing migration" id="create alert_rule_version table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.095648292Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.708768ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.123572638Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.125286577Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.713449ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.162981391Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.164273518Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.290207ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.288946046Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.289070057Z 
level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=123.701µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.303876501Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.31404811Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=10.17467ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.412571822Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.422148378Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=9.578896ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.457430511Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.467797531Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=10.3638ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.533862134Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.539859493Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.999419ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.606316869Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, 
conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.612686849Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.36967ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.620694239Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.620818099Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=124.21µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.636990408Z level=info msg="Executing migration" id=create_alert_configuration_table 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.637798622Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=808.074µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.646335984Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:54.652439423Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.103119ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.689051413Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.689116633Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=65.33µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.702544959Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.711980965Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=9.438426ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.745262697Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.749515549Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=4.252772ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.768193679Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.776672571Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.479952ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.800332327Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.801786783Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.450406ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.846991115Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.848775413Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.783358ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:54.993804072Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.002133713Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.332561ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.059625273Z level=info msg="Executing migration" id="create provenance_type table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.06089535Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.273827ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.129279033Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:17:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:33 policy-pap | sasl.mechanism = GSSAPI 23:17:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:33 policy-pap | 
sasl.oauthbearer.expected.issuer = null 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:33 policy-pap | security.protocol = PLAINTEXT 23:17:33 policy-pap | security.providers = null 23:17:33 policy-pap | send.buffer.bytes = 131072 23:17:33 policy-pap | session.timeout.ms = 45000 23:17:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-pap | ssl.cipher.suites = null 23:17:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:33 policy-pap | ssl.engine.factory.class = null 23:17:33 policy-pap | ssl.key.password = null 23:17:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:33 policy-pap | ssl.keystore.certificate.chain = null 23:17:33 policy-pap | ssl.keystore.key = null 23:17:33 policy-pap | ssl.keystore.location = null 23:17:33 policy-pap | ssl.keystore.password = null 23:17:33 policy-pap | ssl.keystore.type = JKS 23:17:33 policy-pap | ssl.protocol = TLSv1.3 23:17:33 policy-pap | ssl.provider = null 23:17:33 policy-pap | ssl.secure.random.implementation = null 23:17:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-pap | ssl.truststore.certificates = null 23:17:33 policy-pap | ssl.truststore.location = null 23:17:33 policy-pap | ssl.truststore.password = null 23:17:33 policy-pap | ssl.truststore.type = JKS 23:17:33 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-pap | 23:17:33 policy-pap | [2024-04-19T23:15:33.628+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 policy-pap | [2024-04-19T23:15:33.628+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 policy-pap | [2024-04-19T23:15:33.628+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568533628 23:17:33 policy-pap | [2024-04-19T23:15:33.628+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:17:33 policy-pap | [2024-04-19T23:15:34.082+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.131408434Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.130291ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.355007805Z level=info msg="Executing migration" id="create alert_image table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.356653104Z level=info msg="Migration successfully executed" id="create 
alert_image table" duration=1.644879ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.36618085Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.367854309Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.673389ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.481578413Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.481773034Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=199.481µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.617577948Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.619311196Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.734548ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.642851701Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.644488288Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.637617ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.712992744Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.713394945Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.770523294Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.770961456Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=437.972µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.798792992Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.800832602Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=2.04066ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.817492243Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.824590488Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.092475ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.925478201Z level=info msg="Executing migration" id="create library_element table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.927382229Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.902388ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.974766391Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:55.976859401Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.09264ms 23:17:33 
grafana | logger=migrator t=2024-04-19T23:14:56.030352452Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.03179728Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.444008ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.063012832Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.065181992Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.17746ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.069954955Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.071245962Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.290367ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.12207846Z level=info msg="Executing migration" id="increase max description length to 2048" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.12212517Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=48.49µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.182549304Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.182778565Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=186.901µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.190935705Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.191499448Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=563.143µs 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.202845654Z level=info msg="Executing migration" id="create data_keys table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.204539781Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.695787ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.320483296Z level=info msg="Executing migration" id="create secrets table" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.321931063Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.447637ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.405011219Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.438727064Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.717315ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.467495773Z level=info msg="Executing migration" id="add name column into data_keys" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.478455527Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=10.960624ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.484029444Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.484157835Z level=info msg="Migration successfully 
executed" id="copy data_keys id column values into name" duration=128.311µs 23:17:33 policy-pap | [2024-04-19T23:15:34.270+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:17:33 policy-pap | [2024-04-19T23:15:34.520+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@78ea700f, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@cd93621, org.springframework.security.web.context.SecurityContextHolderFilter@18b58c77, org.springframework.security.web.header.HeaderWriterFilter@5ccc971e, org.springframework.security.web.authentication.logout.LogoutFilter@333a2df2, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3051e476, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3c20e9d6, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@42805abe, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3b1137b0, org.springframework.security.web.access.ExceptionTranslationFilter@1f6d7e7c, org.springframework.security.web.access.intercept.AuthorizationFilter@31c0c7e5] 23:17:33 policy-pap | [2024-04-19T23:15:35.266+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:17:33 policy-pap | [2024-04-19T23:15:35.369+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:17:33 policy-pap | [2024-04-19T23:15:35.385+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:17:33 policy-pap | [2024-04-19T23:15:35.402+00:00|INFO|ServiceManager|main] Policy PAP starting 23:17:33 policy-pap | [2024-04-19T23:15:35.402+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:17:33 policy-pap | [2024-04-19T23:15:35.403+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:17:33 policy-pap | [2024-04-19T23:15:35.404+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:17:33 policy-pap | [2024-04-19T23:15:35.404+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:17:33 policy-pap | [2024-04-19T23:15:35.404+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:17:33 policy-pap | [2024-04-19T23:15:35.405+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:17:33 policy-pap | [2024-04-19T23:15:35.407+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8bb904e8-d607-4b4b-97e9-485d0625cc37, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4271b748 23:17:33 policy-pap | [2024-04-19T23:15:35.419+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, 
toString()=SingleThreadedBusTopicSource [consumerGroup=8bb904e8-d607-4b4b-97e9-485d0625cc37, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:33 policy-pap | [2024-04-19T23:15:35.420+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:33 policy-pap | allow.auto.create.topics = true 23:17:33 policy-pap | auto.commit.interval.ms = 5000 23:17:33 policy-pap | auto.include.jmx.reporter = true 23:17:33 policy-pap | auto.offset.reset = latest 23:17:33 policy-pap | bootstrap.servers = [kafka:9092] 23:17:33 policy-pap | check.crcs = true 23:17:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:33 policy-pap | client.id = consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3 23:17:33 policy-pap | client.rack = 23:17:33 policy-pap | connections.max.idle.ms = 540000 23:17:33 policy-pap | default.api.timeout.ms = 60000 23:17:33 policy-pap | enable.auto.commit = true 23:17:33 policy-pap | exclude.internal.topics = true 23:17:33 policy-pap | fetch.max.bytes = 52428800 23:17:33 policy-pap | fetch.max.wait.ms = 500 23:17:33 policy-pap | fetch.min.bytes = 1 23:17:33 policy-pap | group.id = 8bb904e8-d607-4b4b-97e9-485d0625cc37 23:17:33 policy-pap | group.instance.id = null 23:17:33 policy-pap | heartbeat.interval.ms = 3000 23:17:33 policy-pap | interceptor.classes = [] 23:17:33 policy-pap | internal.leave.group.on.close = true 23:17:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:33 policy-pap | isolation.level = read_uncommitted 23:17:33 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-pap | max.partition.fetch.bytes = 1048576 23:17:33 policy-pap | max.poll.interval.ms = 300000 23:17:33 policy-pap | max.poll.records = 500 23:17:33 policy-pap | metadata.max.age.ms = 300000 23:17:33 policy-pap | metric.reporters = [] 23:17:33 policy-pap | metrics.num.samples = 2 23:17:33 policy-pap | metrics.recording.level = INFO 23:17:33 policy-pap | metrics.sample.window.ms = 30000 23:17:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:33 policy-pap | receive.buffer.bytes = 65536 23:17:33 policy-pap | reconnect.backoff.max.ms = 1000 23:17:33 policy-pap | reconnect.backoff.ms = 50 23:17:33 policy-pap | request.timeout.ms = 30000 23:17:33 policy-pap | retry.backoff.ms = 100 23:17:33 policy-pap | sasl.client.callback.handler.class = null 23:17:33 policy-pap | sasl.jaas.config = null 23:17:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 policy-pap | sasl.kerberos.service.name = null 23:17:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 policy-pap | sasl.login.callback.handler.class = null 23:17:33 policy-pap | sasl.login.class = null 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,141] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,142] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,142] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,142] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,142] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,142] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,142] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,142] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,143] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-30 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:17:33 policy-pap | sasl.login.connect.timeout.ms = null 23:17:33 policy-pap | sasl.login.read.timeout.ms = null 23:17:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:33 policy-pap | sasl.mechanism = GSSAPI 23:17:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:33 policy-pap | security.protocol = PLAINTEXT 23:17:33 policy-pap | security.providers = null 23:17:33 policy-pap | send.buffer.bytes = 131072 23:17:33 policy-pap | session.timeout.ms = 45000 23:17:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-pap | ssl.cipher.suites = null 23:17:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:33 policy-pap | ssl.engine.factory.class = null 23:17:33 policy-pap | ssl.key.password = null 23:17:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:33 policy-pap | ssl.keystore.certificate.chain = null 23:17:33 policy-pap | ssl.keystore.key = null 23:17:33 policy-pap | ssl.keystore.location = null 23:17:33 policy-pap | ssl.keystore.password = null 23:17:33 policy-pap | ssl.keystore.type = JKS 23:17:33 policy-pap | ssl.protocol = TLSv1.3 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.5138424Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.551456513Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=37.614453ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.583332188Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.61853063Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=35.201332ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.693870277Z level=info msg="Executing migration" id="create kv_store table v1" 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,144] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-2 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,145] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,146] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 policy-pap | ssl.provider = null 23:17:33 policy-pap | ssl.secure.random.implementation = null 23:17:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-pap | ssl.truststore.certificates = null 23:17:33 policy-pap | ssl.truststore.location = null 23:17:33 policy-pap | ssl.truststore.password = null 23:17:33 policy-pap | ssl.truststore.type = JKS 23:17:33 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-pap | 23:17:33 policy-pap | [2024-04-19T23:15:35.426+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 policy-pap | [2024-04-19T23:15:35.427+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 policy-pap | [2024-04-19T23:15:35.427+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568535426 23:17:33 policy-pap 
| [2024-04-19T23:15:35.427+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Subscribed to topic(s): policy-pdp-pap 23:17:33 policy-pap | [2024-04-19T23:15:35.428+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:17:33 policy-pap | [2024-04-19T23:15:35.428+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f267d0d1-c845-4830-85a0-4f1d9c6fdfd6, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4bc9451b 23:17:33 policy-pap | [2024-04-19T23:15:35.428+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f267d0d1-c845-4830-85a0-4f1d9c6fdfd6, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:33 policy-pap | [2024-04-19T23:15:35.428+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:33 policy-pap | allow.auto.create.topics = true 23:17:33 policy-pap | auto.commit.interval.ms = 5000 23:17:33 policy-pap | auto.include.jmx.reporter = true 23:17:33 policy-pap | auto.offset.reset = latest 23:17:33 policy-pap | bootstrap.servers = [kafka:9092] 23:17:33 policy-pap | check.crcs = true 23:17:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:33 policy-pap | client.id = consumer-policy-pap-4 23:17:33 policy-pap | client.rack = 23:17:33 policy-pap | connections.max.idle.ms = 540000 23:17:33 policy-pap | default.api.timeout.ms = 60000 23:17:33 policy-pap | enable.auto.commit = true 23:17:33 policy-pap | exclude.internal.topics = true 23:17:33 policy-pap | fetch.max.bytes = 52428800 23:17:33 policy-pap | fetch.max.wait.ms = 500 23:17:33 policy-pap | fetch.min.bytes = 1 23:17:33 policy-pap | group.id = policy-pap 23:17:33 policy-pap | group.instance.id = null 23:17:33 policy-pap | heartbeat.interval.ms = 3000 23:17:33 policy-pap | interceptor.classes = [] 23:17:33 policy-pap | internal.leave.group.on.close = true 23:17:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:33 policy-pap | isolation.level = read_uncommitted 23:17:33 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 policy-pap | max.partition.fetch.bytes = 1048576 23:17:33 policy-pap | max.poll.interval.ms = 300000 23:17:33 policy-pap | max.poll.records = 500 23:17:33 policy-pap | metadata.max.age.ms = 300000 23:17:33 policy-pap | metric.reporters = [] 23:17:33 policy-pap | metrics.num.samples = 2 23:17:33 policy-pap | metrics.recording.level = INFO 23:17:33 policy-pap | 
metrics.sample.window.ms = 30000 23:17:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:33 policy-pap | receive.buffer.bytes = 65536 23:17:33 policy-pap | reconnect.backoff.max.ms = 1000 23:17:33 policy-pap | reconnect.backoff.ms = 50 23:17:33 policy-pap | request.timeout.ms = 30000 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,147] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,148] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,149] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,150] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,151] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,151] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,151] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,151] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,152] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,153] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,152] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:37,154] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version 
VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | retry.backoff.ms = 100 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.695417284Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.543427ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.client.callback.handler.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.72729517Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.jaas.config = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.729000078Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.705448ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.744712125Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:17:33 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | 
sasl.kerberos.min.time.before.relogin = 60000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.745257868Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=547.573µs 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.kerberos.service.name = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.765128884Z level=info msg="Executing migration" id="create permission table" 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.767016674Z level=info msg="Migration successfully executed" id="create permission table" duration=1.89045ms 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.callback.handler.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.777228694Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.77852515Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.324716ms 23:17:33 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.786022727Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from 
controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.login.connect.timeout.ms = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.788508729Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.485422ms 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.793034701Z level=info msg="Executing migration" id="create role table" 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:17:33 policy-pap | sasl.login.read.timeout.ms = null 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.794081026Z level=info msg="Migration successfully executed" id="create role table" duration=1.045715ms 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.799355302Z level=info msg="Executing migration" id="add column display_name" 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:56.808241455Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.884643ms 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.814802527Z level=info msg="Executing migration" id="add column group_name" 23:17:33 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.821040517Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.23757ms 23:17:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.824943257Z level=info msg="Executing migration" id="add index role.org_id" 23:17:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.826058162Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.116875ms 23:17:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.830946986Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:17:33 policy-pap | sasl.mechanism = GSSAPI 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.832105601Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.158525ms 23:17:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.837039055Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:17:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.838633603Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.592678ms 23:17:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:33 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.844817023Z level=info msg="Executing migration" id="create team role table" 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.846180309Z level=info msg="Migration successfully executed" id="create team role table" duration=1.539437ms 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB 
DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.904048762Z level=info msg="Executing migration" id="add index team_role.org_id" 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.906424154Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.374882ms 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.913129836Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.914638384Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.509188ms 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.919584057Z level=info msg="Executing migration" id="add index team_role.team_id" 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 
23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.920815364Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.231367ms 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | security.protocol = PLAINTEXT 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.924916793Z level=info msg="Executing migration" id="create user role table" 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | security.providers = null 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.925861099Z level=info msg="Migration successfully executed" id="create user role table" duration=943.796µs 23:17:33 kafka | [2024-04-19 23:15:37,155] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | send.buffer.bytes = 131072 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.930529951Z level=info msg="Executing migration" id="add index user_role.org_id" 23:17:33 kafka | [2024-04-19 23:15:37,156] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | session.timeout.ms = 45000 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.933499045Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.970304ms 23:17:33 kafka | [2024-04-19 23:15:37,156] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.94044599Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:17:33 kafka | [2024-04-19 23:15:37,156] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:17:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.941628205Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.182015ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:17:33 policy-pap | ssl.cipher.suites = null 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.946499819Z level=info msg="Executing migration" id="add index user_role.user_id" 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:17:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.947684504Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.186455ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:17:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.953147091Z level=info msg="Executing migration" id="create builtin role table" 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:17:33 policy-pap | ssl.engine.factory.class = null 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.954295597Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.147386ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:17:33 policy-pap | ssl.key.password = null 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.958337097Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:17:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:33 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:56.959568973Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.231946ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:17:33 policy-pap | ssl.keystore.certificate.chain = null 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.962908859Z level=info msg="Executing migration" id="add index builtin_role.name" 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:17:33 policy-pap | ssl.keystore.key = null 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.964121765Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.212656ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:17:33 policy-pap | ssl.keystore.location = null 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.969696152Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:17:33 policy-pap | ssl.keystore.password = null 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.978807357Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.109585ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:17:33 policy-pap | ssl.keystore.type = JKS 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.98373524Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:17:33 policy-pap | ssl.protocol = TLSv1.3 23:17:33 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.984918746Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.183856ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:17:33 policy-pap | ssl.provider = null 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.988756995Z level=info 
msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:17:33 policy-pap | ssl.secure.random.implementation = null 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.989918531Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.161366ms 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:17:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.994587513Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:17:33 policy-pap | ssl.truststore.certificates = null 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.99580246Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.215097ms 23:17:33 policy-pap | ssl.truststore.location = null 23:17:33 kafka | [2024-04-19 23:15:37,180] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | ssl.truststore.password = null 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:56.999319067Z level=info msg="Executing migration" id="add unique index role.uid" 23:17:33 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.001175356Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.857489ms 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | ssl.truststore.type = JKS 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.005507157Z level=info msg="Executing migration" id="create seed assignment table" 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:17:33 policy-pap | value.deserializer 
= class org.apache.kafka.common.serialization.StringDeserializer 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.007305126Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.797039ms 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.016100018Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:35.433+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.017405304Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.305646ms 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:35.433+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.021306554Z level=info msg="Executing migration" id="add column hidden to role table" 23:17:33 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.031985526Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=10.680092ms 23:17:33 policy-pap | [2024-04-19T23:15:35.433+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568535433 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.035564833Z level=info msg="Executing migration" id="permission kind migration" 23:17:33 policy-pap | [2024-04-19T23:15:35.434+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:17:33 grafana | 
logger=migrator t=2024-04-19T23:14:57.041334841Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.767978ms 23:17:33 policy-pap | [2024-04-19T23:15:35.434+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:35.434+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f267d0d1-c845-4830-85a0-4f1d9c6fdfd6, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.046855148Z level=info msg="Executing migration" id="permission attribute migration" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:35.435+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8bb904e8-d607-4b4b-97e9-485d0625cc37, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.056151063Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=9.299565ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:35.435+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=09e4153a-ce4e-469f-82b3-ceca8064ff95, alive=false, publisher=null]]: starting 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.060919777Z level=info msg="Executing migration" id="permission identifier migration" 23:17:33 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 
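The policy-pap entries interleaved above show the heartbeat topic source being wired up: a Kafka consumer in group policy-pap (client id consumer-policy-pap-4) on kafka:9092, using StringDeserializer for values and subscribed to policy-pdp-pap with a 15000 ms fetch timeout. The following is a minimal standalone Java sketch of an equivalent consumer, not code taken from the PAP itself; the class name and printed output are illustrative, the key deserializer is assumed to match the value deserializer, and every setting not shown in the logged config dump is left at the Kafka 3.6 client defaults.

// Minimal sketch (not CSIT or PAP code) of a consumer equivalent to the
// "Subscribed to topic(s): policy-pdp-pap" lines in the log above.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapTopicConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // servers=[kafka:9092] in the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");           // consumerGroup=policy-pap
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());   // assumed
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); // logged

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));                 // topic named in the log
            // One poll roughly matching the logged fetchTimeout of 15000 ms.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}

Pointed at the kafka container on the CSIT compose network, a consumer like this would see the same PDP status traffic that the PAP heartbeat source reads from policy-pdp-pap.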
23:17:33 policy-pap | [2024-04-19T23:15:35.451+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.069425718Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.505141ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:17:33 policy-pap | acks = -1 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.073111646Z level=info msg="Executing migration" id="add permission identifier index" 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:17:33 policy-pap | auto.include.jmx.reporter = true 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.074419033Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.306946ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:17:33 policy-pap | batch.size = 16384 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.079486387Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:17:33 policy-pap | bootstrap.servers = [kafka:9092] 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.080796873Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.310286ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:17:33 policy-pap | buffer.memory = 33554432 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.084382831Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:17:33 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:17:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.085565487Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.182886ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-0 (state.change.logger) 23:17:33 policy-pap | client.id = producer-1 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.091990988Z level=info msg="Executing migration" id="create query_history table v1" 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:17:33 policy-pap | compression.type = none 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.093069583Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.078214ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:17:33 policy-pap | connections.max.idle.ms = 540000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.098443749Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:17:33 policy-pap | delivery.timeout.ms = 120000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.099655595Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.212066ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:17:33 policy-pap | enable.idempotence = true 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.106022816Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:17:33 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:17:33 policy-pap | interceptor.classes = [] 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.106388408Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=368.662µs 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:17:33 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.11299829Z level=info msg="Executing migration" id="rbac disabled migrator" 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-12 (state.change.logger) 23:17:33 policy-pap | linger.ms = 0 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.113117211Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=119.021µs 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:17:33 policy-pap | max.block.ms = 60000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.116799399Z level=info msg="Executing migration" id="teams permissions migration" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:17:33 policy-pap | max.in.flight.requests.per.connection = 5 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.117615082Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=816.073µs 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:17:33 policy-pap | max.request.size = 1048576 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.121863543Z level=info msg="Executing migration" id="dashboard permissions" 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:17:33 policy-pap | metadata.max.age.ms = 300000 23:17:33 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.122846678Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=985.135µs 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:17:33 policy-pap | metadata.max.idle.ms = 300000 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.128554586Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:17:33 kafka | [2024-04-19 23:15:37,181] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:17:33 policy-pap | metric.reporters = [] 23:17:33 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.129694901Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.140045ms 23:17:33 policy-pap | metrics.num.samples = 2 23:17:33 kafka | [2024-04-19 23:15:37,191] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, 
__consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.133740311Z level=info msg="Executing migration" id="drop managed folder create actions" 23:17:33 policy-pap | metrics.recording.level = INFO 23:17:33 kafka | [2024-04-19 23:15:37,194] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.134015643Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=274.511µs 23:17:33 policy-pap | metrics.sample.window.ms = 30000 23:17:33 kafka | [2024-04-19 23:15:37,203] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.140196292Z level=info msg="Executing migration" id="alerting notification permissions" 23:17:33 policy-pap | partitioner.adaptive.partitioning.enable = true 23:17:33 kafka | [2024-04-19 23:15:37,207] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.141220168Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=1.023646ms 23:17:33 policy-pap | partitioner.availability.timeout.ms = 0 23:17:33 kafka | [2024-04-19 23:15:37,208] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.149686439Z level=info msg="Executing migration" id="create query_history_star table v1" 23:17:33 policy-pap | partitioner.class = null 23:17:33 kafka | [2024-04-19 23:15:37,217] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:17:33 grafana | logger=migrator 
t=2024-04-19T23:14:57.151307616Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.620207ms 23:17:33 policy-pap | partitioner.ignore.keys = false 23:17:33 kafka | [2024-04-19 23:15:37,217] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.169906977Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:17:33 policy-pap | receive.buffer.bytes = 32768 23:17:33 kafka | [2024-04-19 23:15:37,420] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.171368924Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.461747ms 23:17:33 policy-pap | reconnect.backoff.max.ms = 1000 23:17:33 kafka | [2024-04-19 23:15:37,420] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.233285055Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:17:33 policy-pap | reconnect.backoff.ms = 50 23:17:33 kafka | [2024-04-19 23:15:37,421] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.243926057Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.642512ms 23:17:33 policy-pap | request.timeout.ms = 30000 23:17:33 kafka | [2024-04-19 23:15:37,421] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.322823001Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:17:33 policy-pap | retries = 2147483647 23:17:33 kafka | [2024-04-19 23:15:37,421] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.323157134Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=333.953µs 23:17:33 policy-pap | retry.backoff.ms = 100 23:17:33 kafka | [2024-04-19 23:15:37,485] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.382736384Z level=info msg="Executing migration" id="create correlation table v1" 23:17:33 policy-pap | sasl.client.callback.handler.class = null 23:17:33 kafka | [2024-04-19 23:15:37,486] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.385179406Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.440852ms 23:17:33 policy-pap | sasl.jaas.config = null 23:17:33 kafka | [2024-04-19 23:15:37,486] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.482356688Z level=info msg="Executing migration" id="add index correlations.uid" 23:17:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 kafka | [2024-04-19 23:15:37,486] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.48485226Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.496252ms 23:17:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 kafka | [2024-04-19 23:15:37,486] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.688452492Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:17:33 policy-pap | sasl.kerberos.service.name = null 23:17:33 kafka | [2024-04-19 23:15:37,615] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.690762103Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.314401ms 23:17:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 kafka | [2024-04-19 23:15:37,616] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.752347664Z level=info msg="Executing migration" id="add correlation config column" 23:17:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 kafka | [2024-04-19 23:15:37,616] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.callback.handler.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.763472987Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.126233ms 23:17:33 kafka | [2024-04-19 23:15:37,616] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.814365815Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:17:33 kafka | [2024-04-19 23:15:37,616] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:17:33 policy-pap | sasl.login.connect.timeout.ms = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.815826272Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.462487ms 23:17:33 kafka | [2024-04-19 23:15:37,754] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | sasl.login.read.timeout.ms = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.84000352Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:17:33 kafka | [2024-04-19 23:15:37,755] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.842356161Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.352241ms 23:17:33 kafka | [2024-04-19 23:15:37,755] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.937537725Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:17:33 kafka | [2024-04-19 23:15:37,755] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.963421611Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=25.887796ms 23:17:33 kafka | [2024-04-19 23:15:37,755] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:57.997811848Z level=info msg="Executing migration" id="create correlation v2" 23:17:33 kafka | [2024-04-19 23:15:37,820] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:17:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.000568862Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.757304ms 23:17:33 kafka | [2024-04-19 23:15:37,821] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.036365546Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:17:33 kafka | [2024-04-19 23:15:37,821] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 policy-pap | sasl.mechanism = GSSAPI 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.039066829Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.704733ms 23:17:33 kafka | [2024-04-19 23:15:37,821] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.091696045Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,821] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.094598639Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.899404ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:37,945] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.114097624Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:17:33 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:37,946] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.116596456Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.498652ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:37,946] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.122144133Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:17:33 kafka | [2024-04-19 23:15:37,946] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.122644426Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=500.323µs 23:17:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 kafka | [2024-04-19 23:15:37,947] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.126553764Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,160] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.127486819Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=932.415µs 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,161] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.134152141Z level=info msg="Executing migration" id="add provisioning column" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,161] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.142762323Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.607402ms 23:17:33 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:38,161] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | security.protocol = PLAINTEXT 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.149180294Z level=info msg="Executing migration" id="create entity_events table" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,161] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | security.providers = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.150441641Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.260777ms 23:17:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 kafka | [2024-04-19 23:15:38,281] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | send.buffer.bytes = 131072 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.157934468Z level=info msg="Executing migration" id="create dashboard public config v1" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,282] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.159926367Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.991219ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,282] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:17:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.168614499Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,283] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | ssl.cipher.suites = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.169211552Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:17:33 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:38,283] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.175299362Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,401] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.176370827Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 kafka | [2024-04-19 23:15:38,402] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | ssl.engine.factory.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.181493692Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,402] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:17:33 policy-pap | ssl.key.password = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.182395906Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=901.604µs 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,402] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.188258514Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,402] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | ssl.keystore.certificate.chain = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.18937149Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.111376ms 23:17:33 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:38,413] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | ssl.keystore.key = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.226224599Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,414] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | ssl.keystore.location = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.228806432Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.580653ms 23:17:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 kafka | [2024-04-19 23:15:38,414] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:17:33 policy-pap | ssl.keystore.password = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.238494359Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,414] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | ssl.keystore.type = JKS 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.239824266Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.329457ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,415] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | ssl.protocol = TLSv1.3 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.246622258Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,423] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | ssl.provider = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.24913216Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.512682ms 23:17:33 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:17:33 kafka | [2024-04-19 23:15:38,423] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | ssl.secure.random.implementation = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.255074809Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,423] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:17:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.256938749Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.86468ms 23:17:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 kafka | [2024-04-19 23:15:38,423] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | ssl.truststore.certificates = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.261634132Z level=info msg="Executing migration" id="Drop public config table" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,424] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | ssl.truststore.location = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.262998598Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.363626ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,432] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | ssl.truststore.password = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.269265858Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,433] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:17:33 policy-pap | ssl.truststore.type = JKS 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.270436524Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.170086ms 23:17:33 kafka | [2024-04-19 23:15:38,433] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | transaction.timeout.ms = 60000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.275164307Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:17:33 kafka | [2024-04-19 23:15:38,433] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 policy-pap | transactional.id = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.277456728Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.292561ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.283012765Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,433] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.284234282Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.221017ms 23:17:33 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:33 policy-pap | 23:17:33 kafka | [2024-04-19 23:15:38,441] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:35.460+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.291329986Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:17:33 kafka | [2024-04-19 23:15:38,442] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:35.474+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.293336476Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.00726ms 23:17:33 kafka | [2024-04-19 23:15:38,442] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:17:33 policy-pap | [2024-04-19T23:15:35.475+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.297812127Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:17:33 kafka | [2024-04-19 23:15:38,442] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:35.475+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568535474 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.321184661Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.372654ms 23:17:33 kafka | [2024-04-19 23:15:38,442] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 policy-pap | [2024-04-19T23:15:35.475+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=09e4153a-ce4e-469f-82b3-ceca8064ff95, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.326445827Z level=info msg="Executing migration" id="add annotations_enabled column" 23:17:33 kafka | [2024-04-19 23:15:38,451] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:35.475+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=606e891c-d40a-469f-98ab-783668d1c537, alive=false, publisher=null]]: starting 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.333484701Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.039354ms 23:17:33 kafka | [2024-04-19 23:15:38,452] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:35.476+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.340496995Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:17:33 kafka | [2024-04-19 23:15:38,452] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | acks = -1 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.349554029Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.056634ms 23:17:33 kafka | [2024-04-19 23:15:38,452] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:17:33 policy-pap | auto.include.jmx.reporter = true 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.353661569Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:17:33 kafka | [2024-04-19 23:15:38,452] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | batch.size = 16384 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.35390178Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=239.832µs 23:17:33 kafka | [2024-04-19 23:15:38,460] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:33 policy-pap | bootstrap.servers = [kafka:9092] 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.359463127Z level=info msg="Executing migration" id="add share column" 23:17:33 kafka | [2024-04-19 23:15:38,460] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | buffer.memory = 33554432 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.369023334Z level=info msg="Migration successfully executed" id="add share column" duration=9.559057ms 23:17:33 kafka | [2024-04-19 23:15:38,460] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.375545755Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:17:33 kafka | [2024-04-19 23:15:38,460] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | client.id = producer-2 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.375940077Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=394.582µs 23:17:33 kafka | [2024-04-19 23:15:38,460] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | > upgrade 0100-pdp.sql 23:17:33 policy-pap | compression.type = none 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.396903119Z level=info msg="Executing migration" id="create file table" 23:17:33 kafka | [2024-04-19 23:15:38,468] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | connections.max.idle.ms = 540000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.39905532Z level=info msg="Migration successfully executed" id="create file table" duration=2.152401ms 23:17:33 kafka | [2024-04-19 23:15:38,468] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:17:33 policy-pap | delivery.timeout.ms = 120000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.431008245Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:17:33 kafka | [2024-04-19 23:15:38,469] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | enable.idempotence = true 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.433441867Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.433372ms 23:17:33 kafka | [2024-04-19 23:15:38,469] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | interceptor.classes = [] 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.644340862Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:17:33 kafka | [2024-04-19 23:15:38,469] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.647014795Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.676373ms 23:17:33 kafka | [2024-04-19 23:15:38,478] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:17:33 policy-pap | linger.ms = 0 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.775321889Z level=info msg="Executing migration" id="create file_meta table" 23:17:33 kafka | [2024-04-19 23:15:38,479] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | max.block.ms = 60000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.777229799Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.90799ms 23:17:33 kafka | [2024-04-19 23:15:38,479] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:17:33 policy-pap | max.in.flight.requests.per.connection = 5 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.812757731Z level=info msg="Executing migration" id="file table idx: path key" 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | max.request.size = 1048576 23:17:33 kafka | [2024-04-19 23:15:38,479] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:58.814290419Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.533748ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.008784604Z level=info msg="Executing migration" id="set path collation in file table" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,479] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.009225406Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=421.112µs 23:17:33 policy-db-migrator | 23:17:33 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:17:33 kafka | [2024-04-19 23:15:38,519] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | metadata.max.age.ms = 300000 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.031113963Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:17:33 policy-pap | metadata.max.idle.ms = 300000 23:17:33 kafka | [2024-04-19 23:15:38,520] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.031418954Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=306.001µs 23:17:33 policy-pap | metric.reporters = [] 23:17:33 kafka | [2024-04-19 23:15:38,520] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.057139369Z level=info msg="Executing migration" id="managed permissions migration" 23:17:33 policy-pap | metrics.num.samples = 2 23:17:33 kafka | [2024-04-19 23:15:38,520] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.058800058Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.657898ms 23:17:33 policy-pap | metrics.recording.level = INFO 23:17:33 kafka | [2024-04-19 23:15:38,520] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.085624388Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:17:33 policy-pap | metrics.sample.window.ms = 30000 23:17:33 kafka | [2024-04-19 23:15:38,530] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.087753978Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=2.12989ms 23:17:33 policy-pap | partitioner.adaptive.partitioning.enable = true 23:17:33 kafka | [2024-04-19 23:15:38,530] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.132905418Z level=info msg="Executing migration" id="RBAC action name migrator" 23:17:33 policy-pap | partitioner.availability.timeout.ms = 0 23:17:33 kafka | [2024-04-19 23:15:38,531] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.135745082Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.836734ms 23:17:33 policy-pap | partitioner.class = null 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,531] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.161620667Z level=info msg="Executing migration" id="Add UID column to playlist" 23:17:33 policy-pap | partitioner.ignore.keys = false 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,531] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.174306678Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=12.687001ms 23:17:33 policy-pap | receive.buffer.bytes = 32768 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,539] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.183500774Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:17:33 policy-pap | reconnect.backoff.max.ms = 1000 23:17:33 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:17:33 kafka | [2024-04-19 23:15:38,539] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.183803935Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=302.781µs 23:17:33 policy-pap | reconnect.backoff.ms = 50 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,539] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.290243712Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:17:33 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:17:33 kafka | [2024-04-19 23:15:38,540] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | request.timeout.ms = 30000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.293136616Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.892724ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,540] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | retries = 2147483647 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.368754773Z level=info msg="Executing migration" id="update group index for alert rules" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,547] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | retry.backoff.ms = 100 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.369761147Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=1.007294ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,548] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | sasl.client.callback.handler.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.382363679Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:17:33 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:17:33 kafka | [2024-04-19 23:15:38,548] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.jaas.config = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.382885932Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=493.393µs 23:17:33 kafka | [2024-04-19 23:15:38,548] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.408637547Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,548] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.409889012Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.251405ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,556] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | sasl.kerberos.service.name = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.426362072Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,556] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.444081629Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=17.714567ms 23:17:33 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:17:33 kafka | [2024-04-19 23:15:38,556] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.504271631Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,556] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.login.callback.handler.class = null 23:17:33 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:17:33 kafka | [2024-04-19 23:15:38,557] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.515542425Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=11.272004ms 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.707534538Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,570] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | sasl.login.class = null 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.710202172Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=2.669234ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,571] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | sasl.login.connect.timeout.ms = null 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.775157127Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:17:33 kafka | [2024-04-19 23:15:38,571] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.login.read.timeout.ms = null 23:17:33 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:14:59.857577926Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=82.421029ms 23:17:33 kafka | [2024-04-19 23:15:38,572] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.027974334Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:17:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:33 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:17:33 kafka | [2024-04-19 23:15:38,572] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.030631907Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.658153ms 23:17:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,580] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.199656967Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:17:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,580] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.201668257Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.0132ms 23:17:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,580] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.353850494Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:17:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:33 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:17:33 kafka | [2024-04-19 23:15:38,580] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.390616184Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=36.77067ms 23:17:33 policy-pap | sasl.mechanism = GSSAPI 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,580] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.552549758Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:17:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:33 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:17:33 kafka | [2024-04-19 23:15:38,587] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.567795323Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=15.246365ms 23:17:33 policy-db-migrator | JOIN pdpstatistics b 23:17:33 kafka | [2024-04-19 23:15:38,587] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.722496713Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:17:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:33 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:17:33 kafka | [2024-04-19 23:15:38,587] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.723419077Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=923.354µs 23:17:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:33 policy-db-migrator | SET a.id = b.id 23:17:33 kafka | [2024-04-19 23:15:38,588] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.897749303Z level=info msg="Executing migration" id="prevent seeding OnCall access" 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,588] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:00.898127215Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=374.262µs 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,593] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.15300441Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,594] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.153819614Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=816.234µs 23:17:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:33 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:17:33 kafka | [2024-04-19 23:15:38,594] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.193774577Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:17:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,594] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.19425955Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=489.103µs 23:17:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:33 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:17:33 kafka | [2024-04-19 23:15:38,594] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.232489765Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:17:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,602] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.233169798Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=679.953µs 23:17:33 policy-pap | security.protocol = PLAINTEXT 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,602] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.295516951Z level=info msg="Executing migration" id="create folder table" 23:17:33 policy-pap | security.providers = null 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,602] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.29765773Z level=info msg="Migration successfully executed" id="create folder table" duration=2.140649ms 23:17:33 policy-pap | send.buffer.bytes = 131072 23:17:33 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:17:33 kafka | [2024-04-19 23:15:38,602] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.494117562Z level=info msg="Executing migration" id="Add index for parent_uid" 23:17:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,603] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.497275648Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=3.158506ms 23:17:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:17:33 kafka | [2024-04-19 23:15:38,610] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.630753874Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:17:33 policy-pap | ssl.cipher.suites = null 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,610] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.639781747Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=9.028573ms 23:17:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,610] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.902993112Z level=info msg="Executing migration" id="Update folder title length" 23:17:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,611] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:01.903267714Z level=info msg="Migration successfully executed" id="Update folder title length" duration=275.512µs 23:17:33 policy-pap | ssl.engine.factory.class = null 23:17:33 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:17:33 kafka | [2024-04-19 23:15:38,611] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.189880122Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:17:33 policy-pap | ssl.key.password = null 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,618] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.192670155Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.737333ms 23:17:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:33 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:17:33 kafka | [2024-04-19 23:15:38,619] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.299739742Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:17:33 policy-pap | ssl.keystore.certificate.chain = null 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,619] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.302300225Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=2.562513ms 23:17:33 policy-pap | ssl.keystore.key = null 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,619] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.553715072Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:17:33 policy-pap | ssl.keystore.location = null 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,619] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.556908187Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=3.194775ms 23:17:33 policy-pap | ssl.keystore.password = null 23:17:33 policy-db-migrator | > upgrade 0210-sequence.sql 23:17:33 kafka | [2024-04-19 23:15:38,626] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.725607213Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:17:33 policy-pap | ssl.keystore.type = JKS 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,627] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:02.72702224Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=1.418107ms 23:17:33 policy-pap | ssl.protocol = TLSv1.3 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:17:33 kafka | [2024-04-19 23:15:38,627] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.132933292Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:17:33 policy-pap | ssl.provider = null 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,627] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.133786017Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=899.765µs 23:17:33 policy-pap | ssl.secure.random.implementation = null 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,627] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.221995973Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:17:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,634] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.224471725Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.477352ms 23:17:33 policy-pap | ssl.truststore.certificates = null 23:17:33 policy-db-migrator | > upgrade 0220-sequence.sql 23:17:33 kafka | [2024-04-19 23:15:38,634] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.371942598Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:17:33 policy-pap | ssl.truststore.location = null 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,634] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.37866777Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=6.727793ms 23:17:33 policy-pap | ssl.truststore.password = null 23:17:33 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:17:33 kafka | [2024-04-19 23:15:38,634] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.557704425Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:17:33 policy-pap | ssl.truststore.type = JKS 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,635] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.559965126Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.259091ms 23:17:33 policy-pap | transaction.timeout.ms = 60000 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,641] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.73893384Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:17:33 policy-pap | transactional.id = null 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,642] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.741716024Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.802324ms 23:17:33 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:33 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:17:33 kafka | [2024-04-19 23:15:38,642] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.820983857Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:17:33 policy-pap | 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,642] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.825000627Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=3.998769ms 23:17:33 policy-pap | [2024-04-19T23:15:35.476+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 23:17:33 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:17:33 kafka | [2024-04-19 23:15:38,642] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.888174351Z level=info msg="Executing migration" id="create anon_device table" 23:17:33 policy-pap | [2024-04-19T23:15:35.479+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,651] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:03.890613473Z level=info msg="Migration successfully executed" id="create anon_device table" duration=2.439382ms 23:17:33 policy-pap | [2024-04-19T23:15:35.479+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,651] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | [2024-04-19T23:15:35.479+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1713568535479 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.073191225Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:17:33 policy-pap | [2024-04-19T23:15:35.479+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=606e891c-d40a-469f-98ab-783668d1c537, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:17:33 kafka | [2024-04-19 23:15:38,651] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.074907554Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.719549ms 23:17:33 policy-pap | [2024-04-19T23:15:35.479+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:17:33 kafka | [2024-04-19 23:15:38,652] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.228508075Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:17:33 policy-pap | [2024-04-19T23:15:35.479+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:17:33 kafka | [2024-04-19 23:15:38,652] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
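The ProducerConfig dump that policy-pap prints above (bootstrap.servers = [kafka:9092], acks = -1, enable.idempotence = true, retries = 2147483647, key and value serializer = StringSerializer) is the standard shape of an idempotent Kafka producer. The following is only an illustrative Java sketch of an equivalent producer, not the PAP source code; the topic name "policy-pdp-pap" is taken from the log, and the record key/value are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PapLikeProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");                 // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // logged as retries = 2147483647
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // try-with-resources closes the producer and flushes any pending records
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "placeholder PDP-PAP message"));
            }
        }
    }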
(state.change.logger) 23:17:33 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.231038566Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.531461ms 23:17:33 policy-pap | [2024-04-19T23:15:35.485+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:17:33 kafka | [2024-04-19 23:15:38,659] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.39547598Z level=info msg="Executing migration" id="create signing_key table" 23:17:33 policy-pap | [2024-04-19T23:15:35.486+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:17:33 kafka | [2024-04-19 23:15:38,660] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.39755447Z level=info msg="Migration successfully executed" id="create signing_key table" duration=2.07862ms 23:17:33 policy-pap | [2024-04-19T23:15:35.488+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:17:33 kafka | [2024-04-19 23:15:38,660] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.498357687Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:17:33 policy-pap | [2024-04-19T23:15:35.488+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:17:33 kafka | [2024-04-19 23:15:38,660] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.523001255Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=24.643718ms 23:17:33 policy-pap | [2024-04-19T23:15:35.488+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:17:33 kafka | [2024-04-19 23:15:38,660] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.768479669Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:17:33 policy-pap | [2024-04-19T23:15:35.489+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:17:33 kafka | [2024-04-19 23:15:38,670] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.770797181Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.320062ms 23:17:33 policy-pap | [2024-04-19T23:15:35.490+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:17:33 kafka | [2024-04-19 23:15:38,670] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.895601403Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:17:33 policy-pap | [2024-04-19T23:15:35.495+00:00|INFO|ServiceManager|main] Policy PAP started 23:17:33 kafka | [2024-04-19 23:15:38,671] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.896435837Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=838.204µs 23:17:33 kafka | [2024-04-19 23:15:38,671] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:35.491+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.94262596Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:17:33 kafka | [2024-04-19 23:15:38,671] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:17:33 policy-pap | [2024-04-19T23:15:35.496+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.97 seconds (process running for 10.566) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:04.956459387Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.835677ms 23:17:33 kafka | [2024-04-19 23:15:38,708] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:35.953+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.115306792Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:17:33 kafka | [2024-04-19 23:15:38,709] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:17:33 policy-pap | [2024-04-19T23:15:35.966+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: pOvPZ_ZqQ6Wyt7DXYtLMbg 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.117063081Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.823369ms 23:17:33 kafka | [2024-04-19 23:15:38,709] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:35.957+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: pOvPZ_ZqQ6Wyt7DXYtLMbg 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.489755757Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:17:33 kafka | [2024-04-19 23:15:38,710] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:35.957+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: pOvPZ_ZqQ6Wyt7DXYtLMbg 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.493085403Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=3.332866ms 23:17:33 kafka | [2024-04-19 23:15:38,710] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:36.048+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.752566054Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:17:33 kafka | [2024-04-19 23:15:38,717] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:17:33 policy-pap | [2024-04-19T23:15:36.153+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.755108236Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.545831ms 23:17:33 kafka | [2024-04-19 23:15:38,717] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:36.204+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.883274912Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 23:17:33 kafka | [2024-04-19 23:15:38,717] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:17:33 policy-pap | [2024-04-19T23:15:36.210+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:05.888325247Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=5.054345ms 23:17:33 kafka | [2024-04-19 23:15:38,718] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:36.210+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Cluster ID: pOvPZ_ZqQ6Wyt7DXYtLMbg 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.176150773Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:17:33 kafka | [2024-04-19 23:15:38,718] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas 
[] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:36.214+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.178919577Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.768924ms 23:17:33 kafka | [2024-04-19 23:15:38,724] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:36.269+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.267506343Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:17:33 kafka | [2024-04-19 23:15:38,725] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:17:33 policy-pap | [2024-04-19T23:15:36.322+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.270297576Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.793463ms 23:17:33 kafka | [2024-04-19 23:15:38,725] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:36.397+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.341819501Z level=info msg="Executing migration" id="create sso_setting table" 23:17:33 kafka | [2024-04-19 23:15:38,725] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.343864561Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.04518ms 23:17:33 policy-pap | [2024-04-19T23:15:36.467+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:33 kafka | [2024-04-19 23:15:38,725] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, 
high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.391145649Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:17:33 policy-pap | [2024-04-19T23:15:36.609+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,733] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.392655616Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.511237ms 23:17:33 policy-pap | [2024-04-19T23:15:36.673+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,733] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.405043896Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:17:33 policy-pap | [2024-04-19T23:15:38.871+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,733] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.405441417Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=399.031µs 23:17:33 policy-pap | [2024-04-19T23:15:38.882+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:17:33 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:17:33 kafka | [2024-04-19 23:15:38,734] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.411154295Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:17:33 policy-pap | [2024-04-19T23:15:38.906+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,734] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 
0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.411244875Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=91.73µs 23:17:33 policy-pap | [2024-04-19T23:15:38.909+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] (Re-)joining group 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,741] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.415820827Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:17:33 policy-pap | [2024-04-19T23:15:38.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Request joining group due to: need to re-join with the given member-id: consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3-2b8515c2-e006-4074-b437-f25065d1d76f 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,741] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.429277632Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.455345ms 23:17:33 policy-pap | [2024-04-19T23:15:38.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-6c84712f-3f1b-49d5-8456-eb3f8e91ad71 23:17:33 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.433179461Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,741] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:38.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.446511265Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=13.330754ms 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,742] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:38.913+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.455988101Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:38.914+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.456439113Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=450.382µs 23:17:33 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:17:33 kafka | [2024-04-19 23:15:38,742] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 grafana | logger=migrator t=2024-04-19T23:15:06.462354241Z level=info msg="migrations completed" performed=548 skipped=0 duration=25.071358462s 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,749] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | [2024-04-19T23:15:38.914+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] (Re-)joining group 23:17:33 grafana | logger=sqlstore t=2024-04-19T23:15:06.475730326Z level=info msg="Created default admin" user=admin 23:17:33 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:17:33 kafka | [2024-04-19 23:15:38,750] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | [2024-04-19T23:15:41.594+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:17:33 grafana | logger=sqlstore t=2024-04-19T23:15:06.476017857Z level=info msg="Created default organization" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,750] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:41.594+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:17:33 grafana | logger=secrets t=2024-04-19T23:15:06.483181451Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,750] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:41.596+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms 23:17:33 grafana | logger=plugin.store t=2024-04-19T23:15:06.504253033Z level=info msg="Loading plugins..." 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:41.940+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Successfully joined group with generation Generation{generationId=1, memberId='consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3-2b8515c2-e006-4074-b437-f25065d1d76f', protocol='range'} 23:17:33 grafana | logger=local.finder t=2024-04-19T23:15:06.546100594Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:17:33 kafka | [2024-04-19 23:15:38,750] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:17:33 policy-pap | [2024-04-19T23:15:41.942+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c84712f-3f1b-49d5-8456-eb3f8e91ad71', protocol='range'} 23:17:33 grafana | logger=plugin.store t=2024-04-19T23:15:06.546128024Z level=info msg="Plugins loaded" count=55 duration=41.875571ms 23:17:33 kafka | [2024-04-19 23:15:38,757] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:41.949+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-6c84712f-3f1b-49d5-8456-eb3f8e91ad71=Assignment(partitions=[policy-pdp-pap-0])} 23:17:33 grafana | logger=query_data t=2024-04-19T23:15:06.561006616Z level=info msg="Query Service initialization" 23:17:33 kafka | [2024-04-19 23:15:38,757] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:41.949+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Finished assignment for group at generation 1: {consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3-2b8515c2-e006-4074-b437-f25065d1d76f=Assignment(partitions=[policy-pdp-pap-0])} 23:17:33 grafana | logger=live.push_http t=2024-04-19T23:15:06.566319731Z level=info msg="Live Push Gateway initialization" 23:17:33 kafka | [2024-04-19 23:15:38,757] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:41.998+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Successfully synced group in generation Generation{generationId=1, memberId='consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3-2b8515c2-e006-4074-b437-f25065d1d76f', protocol='range'} 23:17:33 grafana | logger=ngalert.migration t=2024-04-19T23:15:06.571187685Z level=info msg=Starting 23:17:33 kafka | [2024-04-19 23:15:38,758] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:17:33 policy-pap | [2024-04-19T23:15:41.998+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:33 grafana | logger=ngalert.migration t=2024-04-19T23:15:06.571586507Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting 
cleanOnDowngrade=false cleanOnUpgrade=false 23:17:33 kafka | [2024-04-19 23:15:38,758] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:42.002+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c84712f-3f1b-49d5-8456-eb3f8e91ad71', protocol='range'} 23:17:33 grafana | logger=ngalert.migration orgID=1 t=2024-04-19T23:15:06.572001218Z level=info msg="Migrating alerts for organisation" 23:17:33 kafka | [2024-04-19 23:15:38,766] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:17:33 policy-pap | [2024-04-19T23:15:42.002+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:33 kafka | [2024-04-19 23:15:38,766] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=ngalert.migration orgID=1 t=2024-04-19T23:15:06.572632151Z level=info msg="Alerts found to migrate" alerts=0 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:42.005+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Adding newly assigned partitions: policy-pdp-pap-0 23:17:33 kafka | [2024-04-19 23:15:38,766] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:17:33 grafana | logger=ngalert.migration t=2024-04-19T23:15:06.57438348Z level=info msg="Completed alerting migration" 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:42.007+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:17:33 kafka | [2024-04-19 23:15:38,766] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 grafana | logger=ngalert.state.manager t=2024-04-19T23:15:06.610139032Z level=info msg="Running in alternative execution of Error/NoData mode" 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:42.033+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:17:33 kafka | [2024-04-19 23:15:38,767] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding 
replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:33 grafana | logger=infra.usagestats.collector t=2024-04-19T23:15:06.613757879Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:17:33 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:17:33 policy-pap | [2024-04-19T23:15:42.033+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Found no committed offset for partition policy-pdp-pap-0 23:17:33 kafka | [2024-04-19 23:15:38,774] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 grafana | logger=provisioning.datasources t=2024-04-19T23:15:06.617526227Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:42.056+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3, groupId=8bb904e8-d607-4b4b-97e9-485d0625cc37] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:17:33 kafka | [2024-04-19 23:15:38,775] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=provisioning.alerting t=2024-04-19T23:15:06.634022777Z level=info msg="starting to provision alerting" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,775] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:42.062+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:17:33 grafana | logger=provisioning.alerting t=2024-04-19T23:15:06.634038817Z level=info msg="finished to provision alerting" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,775] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:56.951+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 23:17:33 grafana | logger=ngalert.state.manager t=2024-04-19T23:15:06.634964482Z level=info msg="Warming state cache for startup" 23:17:33 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:17:33 kafka | [2024-04-19 23:15:38,775] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | [] 23:17:33 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-19T23:15:06.635214343Z level=info msg="Starting MultiOrg Alertmanager" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,787] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | [2024-04-19T23:15:56.952+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 grafana | logger=ngalert.state.manager t=2024-04-19T23:15:06.635280513Z level=info msg="State cache has been initialized" states=0 duration=315.601µs 23:17:33 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:17:33 kafka | [2024-04-19 23:15:38,788] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4da61fd1-2308-426b-bac6-0f75d06b914a","timestampMs":1713568556907,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 grafana | logger=ngalert.scheduler t=2024-04-19T23:15:06.635393944Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,788] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:56.952+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 grafana | logger=ticker t=2024-04-19T23:15:06.635463144Z level=info msg=starting first_tick=2024-04-19T23:15:10Z 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,788] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4da61fd1-2308-426b-bac6-0f75d06b914a","timestampMs":1713568556907,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 grafana | logger=grafanaStorageLogger t=2024-04-19T23:15:06.635900956Z level=info msg="Storage starting" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,788] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:56.958+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:17:33 grafana | logger=http.server t=2024-04-19T23:15:06.643637184Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:17:33 policy-db-migrator | > upgrade 0100-upgrade.sql 23:17:33 kafka | [2024-04-19 23:15:38,797] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | [2024-04-19T23:15:57.070+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting 23:17:33 grafana | logger=grafana.update.checker t=2024-04-19T23:15:06.72187073Z level=info msg="Update check succeeded" duration=86.951248ms 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,798] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-pap | [2024-04-19T23:15:57.070+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting listener 23:17:33 grafana | logger=plugins.update.checker t=2024-04-19T23:15:06.724878625Z level=info msg="Update check succeeded" duration=88.647747ms 23:17:33 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:17:33 kafka | [2024-04-19 23:15:38,798] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:57.071+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting timer 23:17:33 grafana | logger=provisioning.dashboard t=2024-04-19T23:15:06.741374884Z level=info msg="starting to provision dashboards" 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,799] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-pap | [2024-04-19T23:15:57.072+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f2001688-7f3f-437b-bcaf-5fed37cf046d, expireMs=1713568587072] 23:17:33 grafana | logger=grafana-apiserver t=2024-04-19T23:15:07.042803664Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,799] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.073+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=f2001688-7f3f-437b-bcaf-5fed37cf046d, expireMs=1713568587072] 23:17:33 grafana | logger=grafana-apiserver t=2024-04-19T23:15:07.043285326Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 23:17:33 policy-db-migrator | msg 23:17:33 kafka | [2024-04-19 23:15:38,807] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | [2024-04-19T23:15:57.073+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting enqueue 23:17:33 grafana | logger=provisioning.dashboard t=2024-04-19T23:15:07.481721874Z level=info msg="finished to provision dashboards" 23:17:33 policy-db-migrator | upgrade to 1100 completed 23:17:33 policy-pap | [2024-04-19T23:15:57.074+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate started 23:17:33 kafka | [2024-04-19 23:15:38,808] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 grafana | logger=infra.usagestats t=2024-04-19T23:16:27.646659814Z level=info msg="Usage stats are ready to report" 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.076+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,808] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f2001688-7f3f-437b-bcaf-5fed37cf046d","timestampMs":1713568557051,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,808] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.159+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,808] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f2001688-7f3f-437b-bcaf-5fed37cf046d","timestampMs":1713568557051,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,816] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.160+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:17:33 kafka | [2024-04-19 23:15:38,817] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.175+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 kafka | [2024-04-19 23:15:38,817] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f2001688-7f3f-437b-bcaf-5fed37cf046d","timestampMs":1713568557051,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,817] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:17:33 policy-pap | [2024-04-19T23:15:57.176+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:17:33 kafka | [2024-04-19 23:15:38,817] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.181+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,823] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f8f61dde-e709-42de-91d5-428aefe19d78","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 kafka | [2024-04-19 23:15:38,823] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:33 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:17:33 policy-pap | [2024-04-19T23:15:57.182+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:17:33 kafka | [2024-04-19 23:15:38,823] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.182+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,823] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f2001688-7f3f-437b-bcaf-5fed37cf046d","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a319096f-d675-414b-8f81-04373f266b58","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,823] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(d-1cpAONRW2xQxhn5w0MHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.182+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:17:33 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:17:33 policy-pap | [2024-04-19T23:15:57.182+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping enqueue 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.183+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping timer 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.183+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f2001688-7f3f-437b-bcaf-5fed37cf046d, expireMs=1713568587072] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.183+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping listener 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:17:33 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:17:33 policy-pap | [2024-04-19T23:15:57.183+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopped 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate successful 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 start publishing next request 23:17:33 kafka | [2024-04-19 23:15:38,828] 
TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange starting 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange starting listener 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange starting timer 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:17:33 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=096fb3cf-23ce-4b31-a646-611cda5e34f2, expireMs=1713568587189] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange starting enqueue 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange started 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.189+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=096fb3cf-23ce-4b31-a646-611cda5e34f2, expireMs=1713568587189] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:17:33 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:17:33 policy-pap | 
[2024-04-19T23:15:57.188+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f8f61dde-e709-42de-91d5-428aefe19d78","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup"} 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:17:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:17:33 policy-pap | [2024-04-19T23:15:57.202+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"096fb3cf-23ce-4b31-a646-611cda5e34f2","timestampMs":1713568557053,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.344+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:17:33 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"096fb3cf-23ce-4b31-a646-611cda5e34f2","timestampMs":1713568557053,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.344+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:17:33 policy-db-migrator | 23:17:33 kafka | 
[2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.347+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:17:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"096fb3cf-23ce-4b31-a646-611cda5e34f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"e9850bdf-6851-4267-9bfb-33502b76eae9","timestampMs":1713568557217,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-db-migrator | TRUNCATE TABLE sequence 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.387+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange stopping 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.388+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange stopping enqueue 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.388+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange stopping timer 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.388+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=096fb3cf-23ce-4b31-a646-611cda5e34f2, expireMs=1713568587189] 23:17:33 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.388+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange stopping listener 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.388+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange stopped 23:17:33 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpStateChange successful 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 start publishing next request 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting listener 23:17:33 policy-db-migrator | DROP TABLE pdpstatistics 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting timer 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=de469107-a803-46fd-b33d-5683cffa9592, expireMs=1713568587389] 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate starting enqueue 23:17:33 policy-db-migrator | 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 
PdpUpdate started 23:17:33 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.389+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:17:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f2001688-7f3f-437b-bcaf-5fed37cf046d","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a319096f-d675-414b-8f81-04373f266b58","timestampMs":1713568557172,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:17:33 policy-pap | [2024-04-19T23:15:57.390+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:33 policy-db-migrator | -------------- 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"de469107-a803-46fd-b33d-5683cffa9592","timestampMs":1713568557321,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.390+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f2001688-7f3f-437b-bcaf-5fed37cf046d 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | [2024-04-19T23:15:57.400+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:17:33 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-21 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"096fb3cf-23ce-4b31-a646-611cda5e34f2","timestampMs":1713568557053,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:17:33 policy-db-migrator | DROP TABLE statistics_sequence 23:17:33 policy-pap | [2024-04-19T23:15:57.400+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:17:33 policy-db-migrator | -------------- 23:17:33 policy-pap | [2024-04-19T23:15:57.403+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:17:33 policy-db-migrator | 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"de469107-a803-46fd-b33d-5683cffa9592","timestampMs":1713568557321,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:17:33 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:17:33 policy-pap | [2024-04-19T23:15:57.403+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:17:33 kafka | [2024-04-19 23:15:38,828] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:17:33 policy-db-migrator | name version 23:17:33 policy-pap | [2024-04-19T23:15:57.404+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 kafka | [2024-04-19 23:15:38,830] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | policyadmin 1300 23:17:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"096fb3cf-23ce-4b31-a646-611cda5e34f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"e9850bdf-6851-4267-9bfb-33502b76eae9","timestampMs":1713568557217,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,831] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:17:33 policy-pap | [2024-04-19T23:15:57.406+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 096fb3cf-23ce-4b31-a646-611cda5e34f2 23:17:33 kafka | [2024-04-19 23:15:38,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:45 23:17:33 policy-pap | [2024-04-19T23:15:57.417+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 kafka | [2024-04-19 23:15:38,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:46 23:17:33 policy-pap | {"source":"pap-9050d70a-be46-4913-b10f-1628466553aa","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"de469107-a803-46fd-b33d-5683cffa9592","timestampMs":1713568557321,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:46 23:17:33 policy-pap | [2024-04-19T23:15:57.417+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:17:33 kafka | [2024-04-19 23:15:38,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:46 23:17:33 policy-pap | [2024-04-19T23:15:57.418+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:33 kafka | [2024-04-19 23:15:38,834] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:46 23:17:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"de469107-a803-46fd-b33d-5683cffa9592","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"61ec1fb9-3764-402f-ac22-7d631f726aae","timestampMs":1713568557406,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | 
[2024-04-19 23:15:38,834] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:46 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-pap | [2024-04-19T23:15:57.418+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping 23:17:33 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-pap | [2024-04-19T23:15:57.418+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping enqueue 23:17:33 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 policy-pap | [2024-04-19T23:15:57.418+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping timer 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 policy-pap | [2024-04-19T23:15:57.418+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=de469107-a803-46fd-b33d-5683cffa9592, expireMs=1713568587389] 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 policy-pap | [2024-04-19T23:15:57.418+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopping listener 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 policy-pap | [2024-04-19T23:15:57.418+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate stopped 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 policy-pap | [2024-04-19T23:15:57.423+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"de469107-a803-46fd-b33d-5683cffa9592","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"61ec1fb9-3764-402f-ac22-7d631f726aae","timestampMs":1713568557406,"name":"apex-a9c32f9f-93e6-4163-bce4-4482412a87f0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:47 23:17:33 policy-pap | [2024-04-19T23:15:57.424+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id de469107-a803-46fd-b33d-5683cffa9592 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:48 23:17:33 policy-pap | [2024-04-19T23:15:57.424+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 PdpUpdate successful 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:48 23:17:33 policy-pap | [2024-04-19T23:15:57.424+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-a9c32f9f-93e6-4163-bce4-4482412a87f0 has no more requests 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:48 23:17:33 policy-pap | [2024-04-19T23:16:02.513+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 
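Note: the console capture above interleaves the kafka, policy-pap and policy-db-migrator containers on a single stream; each record begins with the Jenkins console timestamp (here 23:17:33) followed by a "<service> | " prefix, and a record may wrap across capture lines before the next marker appears. Below is a minimal, illustrative sketch for splitting such a capture back into per-service streams; the regular expression and the file name in the usage comment are assumptions made for the sketch, not anything produced by the CSIT job itself.

```python
import re
from collections import defaultdict

# Each record in the captured console looks like:
#   23:17:33 policy-pap | [2024-04-19T23:15:57.344+00:00|INFO|network|...] ...
# A new record starts at the Jenkins console timestamp followed by a
# "<service> | " prefix; everything up to the next such marker belongs to
# the same record, even where the capture wraps it across lines.
RECORD_START = re.compile(r"(\d{2}:\d{2}:\d{2}) ([\w.-]+) \| ")

def demultiplex(console_text: str) -> dict[str, list[str]]:
    """Group an interleaved console capture into per-service record lists."""
    streams: dict[str, list[str]] = defaultdict(list)
    matches = list(RECORD_START.finditer(console_text))
    for i, match in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(console_text)
        streams[match.group(2)].append(console_text[match.end():end].strip())
    return dict(streams)

# Usage (the file name is hypothetical):
# with open("console.log") as fh:
#     streams = demultiplex(fh.read())
# print(len(streams.get("policy-pap", [])), "policy-pap records")
```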
23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:48 23:17:33 policy-pap | [2024-04-19T23:16:02.585+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:02.594+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:02.598+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:02.998+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:03.601+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:03.601+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:04.131+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:04.342+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 
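The policy-db-migrator records in this stretch are rows of its final status table (header: ID script operation from_version to_version tag success atTime), flattened into the interleaved stream. The following small parser is written only for illustration against the row format visible in the capture; the class and function names are assumptions for the sketch.

```python
import re
from dataclasses import dataclass

# One flattened status row from the policy-db-migrator output, e.g.:
#   18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:48
ROW = re.compile(
    r"(?P<id>\d+)\s+(?P<script>\S+\.sql)\s+(?P<operation>\S+)\s+"
    r"(?P<from_version>\S+)\s+(?P<to_version>\S+)\s+(?P<tag>\S+)\s+"
    r"(?P<success>[01])\s+(?P<at_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
)

@dataclass
class MigrationRow:
    id: int
    script: str
    operation: str
    from_version: str
    to_version: str
    tag: str
    success: bool
    at_time: str

def parse_row(text: str) -> MigrationRow | None:
    """Parse one db-migrator status row; returns None if no row is found."""
    m = ROW.search(text)
    if not m:
        return None
    return MigrationRow(
        id=int(m["id"]),
        script=m["script"],
        operation=m["operation"],
        from_version=m["from_version"],
        to_version=m["to_version"],
        tag=m["tag"],
        success=m["success"] == "1",
        at_time=m["at_time"],
    )

# Example row taken verbatim from the capture above:
print(parse_row("18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:48"))
```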
23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:04.442+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:04.442+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:49 23:17:33 policy-pap | [2024-04-19T23:16:04.442+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:50 23:17:33 policy-pap | [2024-04-19T23:16:04.457+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-19T23:16:04Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-19T23:16:04Z, user=policyadmin)] 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:50 23:17:33 policy-pap | [2024-04-19T23:16:05.150+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:50 23:17:33 policy-pap | [2024-04-19T23:16:05.151+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:51 23:17:33 policy-pap | 
[2024-04-19T23:16:05.152+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:51 23:17:33 policy-pap | [2024-04-19T23:16:05.152+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:51 23:17:33 policy-pap | [2024-04-19T23:16:05.152+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 policy-pap | [2024-04-19T23:16:05.174+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-19T23:16:05Z, user=policyadmin)] 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 policy-pap | [2024-04-19T23:16:05.520+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 policy-pap | [2024-04-19T23:16:05.520+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 policy-pap | [2024-04-19T23:16:05.520+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:17:33 kafka | [2024-04-19 23:15:38,835] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-pap | [2024-04-19T23:16:05.520+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:17:33 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-pap | [2024-04-19T23:16:05.520+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:17:33 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-pap | [2024-04-19T23:16:05.520+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:17:33 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-pap | [2024-04-19T23:16:05.551+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-19T23:16:05Z, user=policyadmin)] 23:17:33 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:52 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-pap | [2024-04-19T23:16:26.151+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:17:33 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:53 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-pap | [2024-04-19T23:16:26.153+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:17:33 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:53 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-pap | [2024-04-19T23:16:27.073+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f2001688-7f3f-437b-bcaf-5fed37cf046d, expireMs=1713568587072] 23:17:33 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:53 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-pap | [2024-04-19T23:16:27.190+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=096fb3cf-23ce-4b31-a646-611cda5e34f2, expireMs=1713568587189] 23:17:33 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql 
upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:54 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:55 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:55 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:55 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:55 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:55 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator 
| 71 0800-toscaservicetemplate.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:56 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:57 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:57 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:57 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:57 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:57 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:57 23:17:33 kafka | [2024-04-19 23:15:38,836] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:57 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:58 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:58 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:58 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:58 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:58 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:58 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:58 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:59 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:59 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:14:59 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:01 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:02 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:03 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:05 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO 
[GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:06 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:06 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:06 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1904242314450800u 1 2024-04-19 23:15:06 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:06 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:33 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,837] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,838] INFO [Broker id=1] Finished LeaderAndIsr request in 1686ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) 23:17:33 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,838] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,840] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,840] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,840] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,840] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1904242314450900u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:07 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:08 23:17:33 kafka | [2024-04-19 23:15:38,841] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=d-1cpAONRW2xQxhn5w0MHg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:33 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:08 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,841] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,842] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,843] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,843] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE 
[Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1904242314451000u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1904242314451100u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1904242314451200u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1904242314451200u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1904242314451200u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 
1904242314451200u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1904242314451300u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1904242314451300u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1904242314451300u 1 2024-04-19 23:15:09 23:17:33 policy-db-migrator | policyadmin: OK @ 1300 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,844] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 9 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,845] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:33 kafka | [2024-04-19 23:15:38,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,845] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,846] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,847] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:33 kafka | [2024-04-19 23:15:38,907] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-6c84712f-3f1b-49d5-8456-eb3f8e91ad71 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:38,908] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f in Empty state. Created a new member id consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2-7a050cc1-a6f7-4a93-9366-5da526afdbfc and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:38,912] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 8bb904e8-d607-4b4b-97e9-485d0625cc37 in Empty state. Created a new member id consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3-2b8515c2-e006-4074-b437-f25065d1d76f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:38,925] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-6c84712f-3f1b-49d5-8456-eb3f8e91ad71 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:38,925] INFO [GroupCoordinator 1]: Preparing to rebalance group 8bb904e8-d607-4b4b-97e9-485d0625cc37 in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3-2b8515c2-e006-4074-b437-f25065d1d76f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:38,925] INFO [GroupCoordinator 1]: Preparing to rebalance group f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2-7a050cc1-a6f7-4a93-9366-5da526afdbfc with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:41,937] INFO [GroupCoordinator 1]: Stabilized group 8bb904e8-d607-4b4b-97e9-485d0625cc37 generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:41,941] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:41,943] INFO [GroupCoordinator 1]: Stabilized group f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:17:33 kafka | [2024-04-19 23:15:41,957] INFO [GroupCoordinator 1]: Assignment received from leader consumer-8bb904e8-d607-4b4b-97e9-485d0625cc37-3-2b8515c2-e006-4074-b437-f25065d1d76f for group 8bb904e8-d607-4b4b-97e9-485d0625cc37 for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator)
23:17:33 kafka | [2024-04-19 23:15:41,957] INFO [GroupCoordinator 1]: Assignment received from leader consumer-f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f-2-7a050cc1-a6f7-4a93-9366-5da526afdbfc for group f18b460d-d1e6-48bc-8ff7-6e0ad0c0c20f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:17:33 kafka | [2024-04-19 23:15:41,957] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-6c84712f-3f1b-49d5-8456-eb3f8e91ad71 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:17:33 ++ echo 'Tearing down containers...'
23:17:33 Tearing down containers...
23:17:33 ++ docker-compose down -v --remove-orphans
23:17:33 Stopping policy-apex-pdp ...
23:17:33 Stopping policy-pap ...
23:17:33 Stopping kafka ...
23:17:33 Stopping grafana ...
23:17:33 Stopping policy-api ...
23:17:33 Stopping zookeeper ...
23:17:33 Stopping mariadb ...
23:17:33 Stopping prometheus ...
23:17:33 Stopping simulator ...
23:17:34 Stopping grafana ... done
23:17:34 Stopping prometheus ... done
23:17:44 Stopping policy-apex-pdp ... done
23:17:54 Stopping simulator ... done
23:17:54 Stopping policy-pap ... done
23:17:55 Stopping mariadb ... done
23:17:55 Stopping kafka ... done
23:17:56 Stopping zookeeper ... done
23:18:04 Stopping policy-api ... done
23:18:04 Removing policy-apex-pdp ...
23:18:04 Removing policy-pap ...
23:18:04 Removing kafka ...
23:18:04 Removing grafana ...
23:18:04 Removing policy-api ...
23:18:04 Removing policy-db-migrator ...
23:18:04 Removing zookeeper ...
23:18:04 Removing mariadb ...
23:18:04 Removing prometheus ...
23:18:04 Removing simulator ...
23:18:04 Removing policy-api ... done
23:18:04 Removing policy-db-migrator ... done
23:18:04 Removing grafana ... done
23:18:05 Removing kafka ... done
23:18:05 Removing policy-apex-pdp ... done
23:18:05 Removing prometheus ... done
23:18:05 Removing simulator ... done
23:18:05 Removing zookeeper ... done
23:18:05 Removing policy-pap ... done
23:18:05 Removing mariadb ... done
23:18:05 Removing network compose_default
23:18:05 ++ cd /w/workspace/policy-pap-master-project-csit-pap
23:18:05 + load_set
23:18:05 + _setopts=hxB
23:18:05 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:18:05 ++ tr : ' '
23:18:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:18:05 + set +o braceexpand
23:18:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:18:05 + set +o hashall
23:18:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:18:05 + set +o interactive-comments
23:18:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:18:05 + set +o xtrace
23:18:05 ++ echo hxB
23:18:05 ++ sed 's/./& /g'
23:18:05 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:18:05 + set +h
23:18:05 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:18:05 + set +x
23:18:05 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:18:05 + [[ -n /tmp/tmp.heIXjbTZZR ]]
23:18:05 + rsync -av /tmp/tmp.heIXjbTZZR/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:18:05 sending incremental file list
23:18:05 ./
23:18:05 log.html
23:18:05 output.xml
23:18:05 report.html
23:18:05 testplan.txt
23:18:05 
23:18:05 sent 919,238 bytes received 95 bytes 1,838,666.00 bytes/sec
23:18:05 total size is 918,693 speedup is 1.00
23:18:05 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:18:05 + exit 0
23:18:05 $ ssh-agent -k
23:18:05 unset SSH_AUTH_SOCK;
23:18:05 unset SSH_AGENT_PID;
23:18:05 echo Agent pid 2107 killed;
23:18:05 [ssh-agent] Stopped.
23:18:05 Robot results publisher started...
23:18:05 INFO: Checking test criticality is deprecated and will be dropped in a future release!
23:18:05 -Parsing output xml:
23:18:05 Done!
23:18:05 WARNING! Could not find file: **/log.html
23:18:05 WARNING! Could not find file: **/report.html
23:18:05 -Copying log files to build dir:
23:18:06 Done!
23:18:06 -Assigning results to build:
23:18:06 Done!
23:18:06 -Checking thresholds:
23:18:06 Done!
23:18:06 Done publishing Robot results.
23:18:06 [PostBuildScript] - [INFO] Executing post build scripts.
23:18:06 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7677877389294381162.sh
23:18:06 ---> sysstat.sh
23:18:06 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15888300281289423997.sh
23:18:06 ---> package-listing.sh
23:18:06 ++ tr '[:upper:]' '[:lower:]'
23:18:06 ++ facter osfamily
23:18:06 + OS_FAMILY=debian
23:18:06 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:18:06 + START_PACKAGES=/tmp/packages_start.txt
23:18:06 + END_PACKAGES=/tmp/packages_end.txt
23:18:06 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:18:06 + PACKAGES=/tmp/packages_start.txt
23:18:06 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:18:06 + PACKAGES=/tmp/packages_end.txt
23:18:06 + case "${OS_FAMILY}" in
23:18:06 + dpkg -l
23:18:06 + grep '^ii'
23:18:06 + '[' -f /tmp/packages_start.txt ']'
23:18:06 + '[' -f /tmp/packages_end.txt ']'
23:18:06 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:18:06 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:18:06 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:18:06 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:18:06 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10404182798384664840.sh
23:18:06 ---> capture-instance-metadata.sh
23:18:06 Setup pyenv:
23:18:06 system
23:18:06 3.8.13
23:18:06 3.9.13
23:18:06 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:07 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-mK3E from file:/tmp/.os_lf_venv
23:18:08 lf-activate-venv(): INFO: Installing: lftools
23:18:18 lf-activate-venv(): INFO: Adding /tmp/venv-mK3E/bin to PATH
23:18:18 INFO: Running in OpenStack, capturing instance metadata
23:18:18 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6766121685240486949.sh
23:18:18 provisioning config files...
23:18:18 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config3188624926550557959tmp
23:18:18 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:18:18 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:18:18 [EnvInject] - Injecting environment variables from a build step.
23:18:18 [EnvInject] - Injecting as environment variables the properties content
23:18:18 SERVER_ID=logs
23:18:18 
23:18:18 [EnvInject] - Variables injected successfully.
23:18:18 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7540447382306178920.sh
23:18:18 ---> create-netrc.sh
23:18:18 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4029778794763016495.sh
23:18:18 ---> python-tools-install.sh
23:18:18 Setup pyenv:
23:18:18 system
23:18:18 3.8.13
23:18:18 3.9.13
23:18:18 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-mK3E from file:/tmp/.os_lf_venv
23:18:20 lf-activate-venv(): INFO: Installing: lftools
23:18:28 lf-activate-venv(): INFO: Adding /tmp/venv-mK3E/bin to PATH
23:18:28 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5434814035533948228.sh
23:18:28 ---> sudo-logs.sh
23:18:28 Archiving 'sudo' log..
23:18:28 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6654639458493805317.sh
23:18:28 ---> job-cost.sh
23:18:28 Setup pyenv:
23:18:28 system
23:18:28 3.8.13
23:18:28 3.9.13
23:18:28 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:28 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-mK3E from file:/tmp/.os_lf_venv
23:18:30 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:18:35 lf-activate-venv(): INFO: Adding /tmp/venv-mK3E/bin to PATH
23:18:35 INFO: No Stack...
23:18:36 INFO: Retrieving Pricing Info for: v3-standard-8
23:18:36 INFO: Archiving Costs
23:18:36 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins11736003609730546922.sh
23:18:36 ---> logs-deploy.sh
23:18:36 Setup pyenv:
23:18:36 system
23:18:36 3.8.13
23:18:36 3.9.13
23:18:36 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:36 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-mK3E from file:/tmp/.os_lf_venv
23:18:38 lf-activate-venv(): INFO: Installing: lftools
23:18:48 lf-activate-venv(): INFO: Adding /tmp/venv-mK3E/bin to PATH
23:18:48 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1651
23:18:48 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:18:49 Archives upload complete.
23:18:49 INFO: archiving logs to Nexus
23:18:50 ---> uname -a:
23:18:50 Linux prd-ubuntu1804-docker-8c-8g-24474 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:50 
23:18:50 
23:18:50 ---> lscpu:
23:18:50 Architecture: x86_64
23:18:50 CPU op-mode(s): 32-bit, 64-bit
23:18:50 Byte Order: Little Endian
23:18:50 CPU(s): 8
23:18:50 On-line CPU(s) list: 0-7
23:18:50 Thread(s) per core: 1
23:18:50 Core(s) per socket: 1
23:18:50 Socket(s): 8
23:18:50 NUMA node(s): 1
23:18:50 Vendor ID: AuthenticAMD
23:18:50 CPU family: 23
23:18:50 Model: 49
23:18:50 Model name: AMD EPYC-Rome Processor
23:18:50 Stepping: 0
23:18:50 CPU MHz: 2799.998
23:18:50 BogoMIPS: 5599.99
23:18:50 Virtualization: AMD-V
23:18:50 Hypervisor vendor: KVM
23:18:50 Virtualization type: full
23:18:50 L1d cache: 32K
23:18:50 L1i cache: 32K
23:18:50 L2 cache: 512K
23:18:50 L3 cache: 16384K
23:18:50 NUMA node0 CPU(s): 0-7
23:18:50 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:50 
23:18:50 
23:18:50 ---> nproc:
23:18:50 8
23:18:50 
23:18:50 
23:18:50 ---> df -h:
23:18:50 Filesystem Size Used Avail Use% Mounted on
23:18:50 udev 16G 0 16G 0% /dev
23:18:50 tmpfs 3.2G 708K 3.2G 1% /run
23:18:50 /dev/vda1 155G 14G 142G 9% /
23:18:50 tmpfs 16G 0 16G 0% /dev/shm
23:18:50 tmpfs 5.0M 0 5.0M 0% /run/lock
23:18:50 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:18:50 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:18:50 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:18:50 
23:18:50 
23:18:50 ---> free -m:
23:18:50 total used free shared buff/cache available
23:18:50 Mem: 32167 838 25175 0 6153 30873
23:18:50 Swap: 1023 0 1023
23:18:50 
23:18:50 
23:18:50 ---> ip addr:
23:18:50 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:50 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:50 inet 127.0.0.1/8 scope host lo
23:18:50 valid_lft forever preferred_lft forever
23:18:50 inet6 ::1/128 scope host
23:18:50 valid_lft forever preferred_lft forever
23:18:50 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:50 link/ether fa:16:3e:d5:7c:81 brd ff:ff:ff:ff:ff:ff
23:18:50 inet 10.30.106.14/23 brd 10.30.107.255 scope global dynamic ens3
23:18:50 valid_lft 85893sec preferred_lft 85893sec
23:18:50 inet6 fe80::f816:3eff:fed5:7c81/64 scope link
23:18:50 valid_lft forever preferred_lft forever
23:18:50 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:50 link/ether 02:42:cb:68:ab:27 brd ff:ff:ff:ff:ff:ff
23:18:50 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:50 valid_lft forever preferred_lft forever
23:18:50 
23:18:50 
23:18:50 ---> sar -b -r -n DEV:
23:18:50 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-24474) 04/19/24 _x86_64_ (8 CPU)
23:18:50 
23:18:50 23:10:26 LINUX RESTART (8 CPU)
23:18:50 
23:18:50 23:11:02 tps rtps wtps bread/s bwrtn/s
23:18:50 23:12:01 114.15 41.03 73.12 1860.43 31436.98
23:18:50 23:13:01 131.73 22.86 108.87 2742.34 37124.48
23:18:50 23:14:01 190.95 0.28 190.67 40.53 99795.50
23:18:50 23:15:01 324.16 12.73 311.43 763.54 75548.74
23:18:50 23:16:01 74.20 0.80 73.40 56.26 28156.19
23:18:50 23:17:01 20.01 0.03 19.98 0.27 25195.40
23:18:50 23:18:01 47.67 0.03 47.63 5.33 4383.02
23:18:50 Average: 129.02 11.04 117.98 778.65 43118.36
23:18:50 
23:18:50 23:11:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:50 23:12:01 30185144 31757908 2754076 8.36 68560 1815320 1422208 4.18 813784 1651816 158312
23:18:50 23:13:01 29521444 31703596 3417776 10.38 90152 2381864 1580800 4.65 953136 2128320 372520
23:18:50 23:14:01 26752356 31682672 6186864 18.78 134412 4961724 1425396 4.19 1002464 4699156 645776
23:18:50 23:15:01 25040304 30910716 7898916 23.98 153660 5831736 7066128 20.79 1876200 5432264 1932
23:18:50 23:16:01 23640736 29629760 9298484 28.23 157144 5941140 8751172 25.75 3237652 5456316 464
23:18:50 23:17:01 23639852 29629448 9299368 28.23 157308 5941368 8740592 25.72 3238420 5455048 292
23:18:50 23:18:01 25114260 31121484 7824960 23.76 158176 5968408 2399184 7.06 1818256 5451068 72
23:18:50 Average: 26270585 30919369 6668635 20.25 131345 4691651 4483640 13.19 1848559 4324855 168481
23:18:50 
23:18:50 23:11:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:18:50 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:12:01 ens3 145.50 97.75 1005.64 31.86 0.00 0.00 0.00 0.00
23:18:50 23:12:01 lo 1.63 1.63 0.18 0.18 0.00 0.00 0.00 0.00
23:18:50 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:13:01 ens3 117.48 86.79 2537.67 11.70 0.00 0.00 0.00 0.00
23:18:50 23:13:01 br-f6444a636e7b 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:13:01 lo 5.13 5.13 0.49 0.49 0.00 0.00 0.00 0.00
23:18:50 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:14:01 ens3 870.69 363.27 18905.29 26.28 0.00 0.00 0.00 0.00
23:18:50 23:14:01 br-f6444a636e7b 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:14:01 lo 6.00 6.00 0.61 0.61 0.00 0.00 0.00 0.00
23:18:50 23:15:01 vethcdadc45 1.83 2.10 0.18 0.20 0.00 0.00 0.00 0.00
23:18:50 23:15:01 vethdd7801d 0.77 1.00 0.05 0.06 0.00 0.00 0.00 0.00
23:18:50 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:15:01 ens3 257.21 135.04 9949.98 9.83 0.00 0.00 0.00 0.00
23:18:50 23:16:01 vethcdadc45 15.55 15.43 2.11 2.13 0.00 0.00 0.00 0.00
23:18:50 23:16:01 vethdd7801d 45.66 39.49 17.14 39.83 0.00 0.00 0.00 0.00
23:18:50 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:16:01 ens3 8.83 6.02 2.35 2.07 0.00 0.00 0.00 0.00
23:18:50 23:17:01 vethcdadc45 13.83 9.33 1.05 1.34 0.00 0.00 0.00 0.00
23:18:50 23:17:01 vethdd7801d 0.47 0.47 0.63 0.08 0.00 0.00 0.00 0.00
23:18:50 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:17:01 ens3 2.43 2.18 0.46 0.66 0.00 0.00 0.00 0.00
23:18:50 23:18:01 vethdd7801d 0.38 0.67 0.02 0.04 0.00 0.00 0.00 0.00
23:18:50 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 23:18:01 ens3 22.11 19.56 7.35 18.00 0.00 0.00 0.00 0.00
23:18:50 23:18:01 br-f6444a636e7b 4.25 4.70 1.99 2.19 0.00 0.00 0.00 0.00
23:18:50 Average: vethdd7801d 6.77 5.96 2.55 5.73 0.00 0.00 0.00 0.00
23:18:50 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:50 Average: ens3 203.60 101.52 4638.36 14.30 0.00 0.00 0.00 0.00
23:18:50 Average: br-f6444a636e7b 0.61 0.67 0.28 0.31 0.00 0.00 0.00 0.00
23:18:50 
23:18:50 
23:18:50 ---> sar -P ALL:
23:18:50 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-24474) 04/19/24 _x86_64_ (8 CPU)
23:18:50 
23:18:50 23:10:26 LINUX RESTART (8 CPU)
23:18:50 
23:18:50 23:11:02 CPU %user %nice %system %iowait %steal %idle
23:18:50 23:12:01 all 10.42 0.00 0.87 3.16 0.04 85.51
23:18:50 23:12:01 0 1.71 0.00 0.54 0.14 0.02 97.60
23:18:50 23:12:01 1 30.31 0.00 1.83 2.31 0.05 65.50
23:18:50 23:12:01 2 23.20 0.00 1.42 0.92 0.03 74.42
23:18:50 23:12:01 3 12.46 0.00 1.07 0.36 0.03 86.08
23:18:50 23:12:01 4 1.52 0.00 0.83 10.47 0.05 87.13
23:18:50 23:12:01 5 9.65 0.00 0.56 0.25 0.10 89.43
23:18:50 23:12:01 6 2.03 0.00 0.44 0.25 0.00 97.27
23:18:50 23:12:01 7 2.44 0.00 0.25 10.64 0.02 86.64
23:18:50 23:13:01 all 9.76 0.00 0.99 2.85 0.03 86.37
23:18:50 23:13:01 0 16.26 0.00 1.80 0.50 0.03 81.40
23:18:50 23:13:01 1 2.24 0.00 0.68 0.13 0.02 96.93
23:18:50 23:13:01 2 4.43 0.00 0.43 0.70 0.02 94.42
23:18:50 23:13:01 3 12.17 0.00 0.80 1.20 0.07 85.75
23:18:50 23:13:01 4 1.86 0.00 1.09 6.08 0.05 90.92
23:18:50 23:13:01 5 0.98 0.00 0.53 0.87 0.00 97.61
23:18:50 23:13:01 6 4.22 0.00 0.64 9.47 0.03 85.65
23:18:50 23:13:01 7 35.86 0.00 1.92 3.89 0.05 58.28
23:18:50 23:14:01 all 9.61 0.00 4.08 11.33 0.06 74.92
23:18:50 23:14:01 0 9.66 0.00 4.43 0.59 0.07 85.25
23:18:50 23:14:01 1 10.18 0.00 4.44 4.78 0.05 80.56
23:18:50 23:14:01 2 12.46 0.00 4.10 0.07 0.05 83.33
23:18:50 23:14:01 3 9.01 0.00 3.66 15.16 0.05 72.11
23:18:50 23:14:01 4 8.55 0.00 4.14 0.81 0.07 86.44
23:18:50 23:14:01 5 10.94 0.00 3.79 0.20 0.05 85.02
23:18:50 23:14:01 6 8.71 0.00 3.97 23.75 0.07 63.50
23:18:50 23:14:01 7 7.35 0.00 4.04 45.41 0.07 43.13
23:18:50 23:15:01 all 9.01 0.00 2.74 12.20 0.06 76.00
23:18:50 23:15:01 0 9.52 0.00 3.22 4.55 0.05 82.66
23:18:50 23:15:01 1 8.95 0.00 2.86 4.30 0.08 83.80
23:18:50 23:15:01 2 9.94 0.00 2.90 10.91 0.07 76.18
23:18:50 23:15:01 3 7.72 0.00 2.43 9.46 0.07 80.33
23:18:50 23:15:01 4 9.30 0.00 3.02 8.40 0.05 79.23
23:18:50 23:15:01 5 11.29 0.00 2.90 30.30 0.07 55.45
23:18:50 23:15:01 6 9.27 0.00 2.50 18.18 0.05 70.00
23:18:50 23:15:01 7 6.09 0.00 2.13 11.53 0.05 80.20
23:18:50 23:16:01 all 23.65 0.00 2.53 4.23 0.08 69.50
23:18:50 23:16:01 0 24.66 0.00 2.77 2.85 0.08 69.64
23:18:50 23:16:01 1 26.11 0.00 2.61 3.45 0.08 67.74
23:18:50 23:16:01 2 22.58 0.00 2.82 14.61 0.10 59.89
23:18:50 23:16:01 3 24.92 0.00 2.75 7.92 0.07 64.33
23:18:50 23:16:01 4 18.66 0.00 1.86 2.00 0.10 77.38
23:18:50 23:16:01 5 27.90 0.00 2.78 1.86 0.08 67.38
23:18:50 23:16:01 6 26.72 0.00 2.65 0.94 0.08 69.61
23:18:50 23:16:01 7 17.66 0.00 2.01 0.23 0.08 80.02
23:18:50 23:17:01 all 3.63 0.00 0.38 1.47 0.06 94.46
23:18:50 23:17:01 0 3.63 0.00 0.38 10.80 0.07 85.12
23:18:50 23:17:01 1 4.89 0.00 0.50 0.00 0.05 94.56
23:18:50 23:17:01 2 2.23 0.00 0.18 0.15 0.05 97.38
23:18:50 23:17:01 3 5.05 0.00 0.55 0.42 0.08 93.90
23:18:50 23:17:01 4 3.22 0.00 0.27 0.05 0.07 96.39
23:18:50 23:17:01 5 3.27 0.00 0.27 0.00 0.05 96.41
23:18:50 23:17:01 6 3.14 0.00 0.38 0.22 0.07 96.19
23:18:50 23:17:01 7 3.62 0.00 0.47 0.18 0.05 95.68
23:18:50 23:18:01 all 1.47 0.00 0.49 0.34 0.05 97.65
23:18:50 23:18:01 0 1.24 0.00 0.53 1.69 0.03 96.51
23:18:50 23:18:01 1 1.27 0.00 0.52 0.12 0.07 98.03
23:18:50 23:18:01 2 1.32 0.00 0.52 0.07 0.03 98.06
23:18:50 23:18:01 3 3.47 0.00 0.40 0.05 0.07 96.02
23:18:50 23:18:01 4 1.05 0.00 0.38 0.12 0.03 98.42
23:18:50 23:18:01 5 1.25 0.00 0.52 0.05 0.03 98.15
23:18:50 23:18:01 6 0.75 0.00 0.57 0.62 0.05 98.01
23:18:50 23:18:01 7 1.36 0.00 0.50 0.02 0.03 98.09
23:18:50 Average: all 9.64 0.00 1.72 5.07 0.05 83.51
23:18:50 Average: 0 9.53 0.00 1.95 3.02 0.05 85.44
23:18:50 Average: 1 11.95 0.00 1.92 2.15 0.06 83.93
23:18:50 Average: 2 10.85 0.00 1.77 3.92 0.05 83.41
23:18:50 Average: 3 10.67 0.00 1.66 4.92 0.06 82.68
23:18:50 Average: 4 6.30 0.00 1.65 3.97 0.06 88.01
23:18:50 Average: 5 9.31 0.00 1.62 4.78 0.06 84.24
23:18:50 Average: 6 7.83 0.00 1.59 7.60 0.05 82.93
23:18:50 Average: 7 10.66 0.00 1.62 10.21 0.05 77.47
23:18:50 
23:18:50 
23:18:50 