23:11:00 Started by timer
23:11:00 Running as SYSTEM
23:11:00 [EnvInject] - Loading node environment variables.
23:11:00 Building remotely on prd-ubuntu1804-docker-8c-8g-13424 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:11:00 [ssh-agent] Looking for ssh-agent implementation...
23:11:00 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:11:00 $ ssh-agent
23:11:00 SSH_AUTH_SOCK=/tmp/ssh-dHwYGvWDeR5I/agent.2076
23:11:00 SSH_AGENT_PID=2078
23:11:00 [ssh-agent] Started.
23:11:00 Running ssh-add (command line suppressed)
23:11:00 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9201560483265832117.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9201560483265832117.key)
23:11:00 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:11:00 The recommended git tool is: NONE
23:11:02 using credential onap-jenkins-ssh
23:11:02 Wiping out workspace first.
23:11:02 Cloning the remote Git repository
23:11:02 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:02  > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:02 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:02  > git --version # timeout=10
23:11:02  > git --version # 'git version 2.17.1'
23:11:02 using GIT_SSH to set credentials Gerrit user
23:11:02 Verifying host key using manually-configured host key entries
23:11:02  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:03  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:03  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:03 Avoid second fetch
23:11:03  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:03 Checking out Revision 9e33a52d0cf03c0458911330fb72037d01b07a4a (refs/remotes/origin/master)
23:11:03  > git config core.sparsecheckout # timeout=10
23:11:03  > git checkout -f 9e33a52d0cf03c0458911330fb72037d01b07a4a # timeout=30
23:11:03 Commit message: "Add Prometheus config for http and k8s participants in csit"
23:11:03  > git rev-list --no-walk 9e33a52d0cf03c0458911330fb72037d01b07a4a # timeout=10
23:11:03 provisioning config files...
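
The checkout above pins the workspace to the exact revision 9e33a52d0cf03c0458911330fb72037d01b07a4a rather than a branch tip, which makes the build reproducible. A minimal sketch of recreating that detached-HEAD checkout locally, assuming anonymous read access to the git:// mirror (the fetch/checkout flags mirror the trace above; the target directory name is only illustrative):

    # Reproduce the Jenkins checkout: init, fetch all heads, detach onto the SHA.
    REPO=git://cloud.onap.org/mirror/policy/docker.git
    SHA=9e33a52d0cf03c0458911330fb72037d01b07a4a   # revision reported in this log
    git init policy-docker && cd policy-docker
    git fetch --tags --progress -- "$REPO" '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f "$SHA"                         # detached HEAD, as in the CI workspace
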
23:11:03 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:03 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:03 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10389856696790846078.sh
23:11:03 ---> python-tools-install.sh
23:11:03 Setup pyenv:
23:11:04 * system (set by /opt/pyenv/version)
23:11:04 * 3.8.13 (set by /opt/pyenv/version)
23:11:04 * 3.9.13 (set by /opt/pyenv/version)
23:11:04 * 3.10.6 (set by /opt/pyenv/version)
23:11:08 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-48nb
23:11:08 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:11 lf-activate-venv(): INFO: Installing: lftools
23:11:43 lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
23:11:43 Generating Requirements File
23:12:11 Python 3.10.6
23:12:11 pip 24.0 from /tmp/venv-48nb/lib/python3.10/site-packages/pip (python 3.10)
23:12:11 appdirs==1.4.4
23:12:11 argcomplete==3.2.3
23:12:11 aspy.yaml==1.3.0
23:12:11 attrs==23.2.0
23:12:11 autopage==0.5.2
23:12:11 beautifulsoup4==4.12.3
23:12:11 boto3==1.34.64
23:12:11 botocore==1.34.64
23:12:11 bs4==0.0.2
23:12:11 cachetools==5.3.3
23:12:11 certifi==2024.2.2
23:12:11 cffi==1.16.0
23:12:11 cfgv==3.4.0
23:12:11 chardet==5.2.0
23:12:11 charset-normalizer==3.3.2
23:12:11 click==8.1.7
23:12:11 cliff==4.6.0
23:12:11 cmd2==2.4.3
23:12:11 cryptography==3.3.2
23:12:11 debtcollector==3.0.0
23:12:11 decorator==5.1.1
23:12:11 defusedxml==0.7.1
23:12:11 Deprecated==1.2.14
23:12:11 distlib==0.3.8
23:12:11 dnspython==2.6.1
23:12:11 docker==4.2.2
23:12:11 dogpile.cache==1.3.2
23:12:11 email_validator==2.1.1
23:12:11 filelock==3.13.1
23:12:11 future==1.0.0
23:12:11 gitdb==4.0.11
23:12:11 GitPython==3.1.42
23:12:11 google-auth==2.28.2
23:12:11 httplib2==0.22.0
23:12:11 identify==2.5.35
23:12:11 idna==3.6
23:12:11 importlib-resources==1.5.0
23:12:11 iso8601==2.1.0
23:12:11 Jinja2==3.1.3
23:12:11 jmespath==1.0.1
23:12:11 jsonpatch==1.33
23:12:11 jsonpointer==2.4
23:12:11 jsonschema==4.21.1
23:12:11 jsonschema-specifications==2023.12.1
23:12:11 keystoneauth1==5.6.0
23:12:11 kubernetes==29.0.0
23:12:11 lftools==0.37.10
23:12:11 lxml==5.1.0
23:12:11 MarkupSafe==2.1.5
23:12:11 msgpack==1.0.8
23:12:11 multi_key_dict==2.0.3
23:12:11 munch==4.0.0
23:12:11 netaddr==1.2.1
23:12:11 netifaces==0.11.0
23:12:11 niet==1.4.2
23:12:11 nodeenv==1.8.0
23:12:11 oauth2client==4.1.3
23:12:11 oauthlib==3.2.2
23:12:11 openstacksdk==3.0.0
23:12:11 os-client-config==2.1.0
23:12:11 os-service-types==1.7.0
23:12:11 osc-lib==3.0.1
23:12:11 oslo.config==9.4.0
23:12:11 oslo.context==5.5.0
23:12:11 oslo.i18n==6.3.0
23:12:11 oslo.log==5.5.0
23:12:11 oslo.serialization==5.4.0
23:12:11 oslo.utils==7.1.0
23:12:11 packaging==24.0
23:12:11 pbr==6.0.0
23:12:11 platformdirs==4.2.0
23:12:11 prettytable==3.10.0
23:12:11 pyasn1==0.5.1
23:12:11 pyasn1-modules==0.3.0
23:12:11 pycparser==2.21
23:12:11 pygerrit2==2.0.15
23:12:11 PyGithub==2.2.0
23:12:11 pyinotify==0.9.6
23:12:11 PyJWT==2.8.0
23:12:11 PyNaCl==1.5.0
23:12:11 pyparsing==2.4.7
23:12:11 pyperclip==1.8.2
23:12:11 pyrsistent==0.20.0
23:12:11 python-cinderclient==9.5.0
23:12:11 python-dateutil==2.9.0.post0
23:12:11 python-heatclient==3.5.0
23:12:11 python-jenkins==1.8.2
23:12:11 python-keystoneclient==5.4.0
23:12:11 python-magnumclient==4.4.0
23:12:11 python-novaclient==18.6.0
23:12:11 python-openstackclient==6.5.0
23:12:11 python-swiftclient==4.5.0
23:12:11 PyYAML==6.0.1
23:12:11 referencing==0.33.0
23:12:11 requests==2.31.0
23:12:11 requests-oauthlib==1.4.0
23:12:11 requestsexceptions==1.4.0
23:12:11 rfc3986==2.0.0
23:12:11 rpds-py==0.18.0
23:12:11 rsa==4.9
23:12:11 ruamel.yaml==0.18.6
23:12:11 ruamel.yaml.clib==0.2.8
23:12:11 s3transfer==0.10.1
23:12:11 simplejson==3.19.2
23:12:11 six==1.16.0
23:12:11 smmap==5.0.1
23:12:11 soupsieve==2.5
23:12:11 stevedore==5.2.0
23:12:11 tabulate==0.9.0
23:12:11 toml==0.10.2
23:12:11 tomlkit==0.12.4
23:12:11 tqdm==4.66.2
23:12:11 typing_extensions==4.10.0
23:12:11 tzdata==2024.1
23:12:11 urllib3==1.26.18
23:12:11 virtualenv==20.25.1
23:12:11 wcwidth==0.2.13
23:12:11 websocket-client==1.7.0
23:12:11 wrapt==1.16.0
23:12:11 xdg==6.0.0
23:12:11 xmltodict==0.13.0
23:12:11 yq==3.2.3
23:12:11 [EnvInject] - Injecting environment variables from a build step.
23:12:11 [EnvInject] - Injecting as environment variables the properties content
23:12:11 SET_JDK_VERSION=openjdk17
23:12:11 GIT_URL="git://cloud.onap.org/mirror"
23:12:11 
23:12:11 [EnvInject] - Variables injected successfully.
23:12:11 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins17016876437216414179.sh
23:12:11 ---> update-java-alternatives.sh
23:12:11 ---> Updating Java version
23:12:11 ---> Ubuntu/Debian system detected
23:12:12 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:12 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:12 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:12 openjdk version "17.0.4" 2022-07-19
23:12:12 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:12 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:12 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:12 [EnvInject] - Injecting environment variables from a build step.
23:12:12 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:12 [EnvInject] - Variables injected successfully.
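
update-java-alternatives.sh flips the node's default JDK to OpenJDK 17 through Debian's alternatives system and then prints java -version to verify, as traced above. A rough equivalent of what the step does, assuming the openjdk-17 package has already registered these alternative paths (the helper script itself is not shown in the log, so this is only a sketch):

    # Switch the java/javac alternatives to OpenJDK 17 and verify.
    sudo update-alternatives --set java  /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
    java -version    # expect: openjdk version "17.0.x"
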
23:12:12 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins1463812614679641306.sh
23:12:12 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:12 + set +u
23:12:12 + save_set
23:12:12 + RUN_CSIT_SAVE_SET=ehxB
23:12:12 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:12 + '[' 1 -eq 0 ']'
23:12:12 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:12 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:12 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:12 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:12 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:12 + export ROBOT_VARIABLES=
23:12:12 + ROBOT_VARIABLES=
23:12:12 + export PROJECT=pap
23:12:12 + PROJECT=pap
23:12:12 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:12 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:12 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:12 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:12 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:12 + relax_set
23:12:12 + set +e
23:12:12 + set +o pipefail
23:12:12 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:12 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:12 +++ mktemp -d
23:12:12 ++ ROBOT_VENV=/tmp/tmp.Yl1wrOOp9K
23:12:12 ++ echo ROBOT_VENV=/tmp/tmp.Yl1wrOOp9K
23:12:12 +++ python3 --version
23:12:12 ++ echo 'Python version is: Python 3.6.9'
23:12:12 Python version is: Python 3.6.9
23:12:12 ++ python3 -m venv --clear /tmp/tmp.Yl1wrOOp9K
23:12:13 ++ source /tmp/tmp.Yl1wrOOp9K/bin/activate
23:12:13 +++ deactivate nondestructive
23:12:13 +++ '[' -n '' ']'
23:12:13 +++ '[' -n '' ']'
23:12:13 +++ '[' -n /bin/bash -o -n '' ']'
23:12:13 +++ hash -r
23:12:13 +++ '[' -n '' ']'
23:12:13 +++ unset VIRTUAL_ENV
23:12:13 +++ '[' '!' nondestructive = nondestructive ']'
23:12:13 +++ VIRTUAL_ENV=/tmp/tmp.Yl1wrOOp9K
23:12:13 +++ export VIRTUAL_ENV
23:12:13 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:13 +++ PATH=/tmp/tmp.Yl1wrOOp9K/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:13 +++ export PATH
23:12:13 +++ '[' -n '' ']'
23:12:13 +++ '[' -z '' ']'
23:12:13 +++ _OLD_VIRTUAL_PS1=
23:12:13 +++ '[' 'x(tmp.Yl1wrOOp9K) ' '!=' x ']'
23:12:13 +++ PS1='(tmp.Yl1wrOOp9K) '
23:12:13 +++ export PS1
23:12:13 +++ '[' -n /bin/bash -o -n '' ']'
23:12:13 +++ hash -r
23:12:13 ++ set -exu
23:12:13 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:17 ++ echo 'Installing Python Requirements'
23:12:17 Installing Python Requirements
23:12:17 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:35 ++ python3 -m pip -qq freeze
23:12:36 bcrypt==4.0.1
23:12:36 beautifulsoup4==4.12.3
23:12:36 bitarray==2.9.2
23:12:36 certifi==2024.2.2
23:12:36 cffi==1.15.1
23:12:36 charset-normalizer==2.0.12
23:12:36 cryptography==40.0.2
23:12:36 decorator==5.1.1
23:12:36 elasticsearch==7.17.9
23:12:36 elasticsearch-dsl==7.4.1
23:12:36 enum34==1.1.10
23:12:36 idna==3.6
23:12:36 importlib-resources==5.4.0
23:12:36 ipaddr==2.2.0
23:12:36 isodate==0.6.1
23:12:36 jmespath==0.10.0
23:12:36 jsonpatch==1.32
23:12:36 jsonpath-rw==1.4.0
23:12:36 jsonpointer==2.3
23:12:36 lxml==5.1.0
23:12:36 netaddr==0.8.0
23:12:36 netifaces==0.11.0
23:12:36 odltools==0.1.28
23:12:36 paramiko==3.4.0
23:12:36 pkg_resources==0.0.0
23:12:36 ply==3.11
23:12:36 pyang==2.6.0
23:12:36 pyangbind==0.8.1
23:12:36 pycparser==2.21
23:12:36 pyhocon==0.3.60
23:12:36 PyNaCl==1.5.0
23:12:36 pyparsing==3.1.2
23:12:36 python-dateutil==2.9.0.post0
23:12:36 regex==2023.8.8
23:12:36 requests==2.27.1
23:12:36 robotframework==6.1.1
23:12:36 robotframework-httplibrary==0.4.2
23:12:36 robotframework-pythonlibcore==3.0.0
23:12:36 robotframework-requests==0.9.4
23:12:36 robotframework-selenium2library==3.0.0
23:12:36 robotframework-seleniumlibrary==5.1.3
23:12:36 robotframework-sshlibrary==3.8.0
23:12:36 scapy==2.5.0
23:12:36 scp==0.14.5
23:12:36 selenium==3.141.0
23:12:36 six==1.16.0
23:12:36 soupsieve==2.3.2.post1
23:12:36 urllib3==1.26.18
23:12:36 waitress==2.0.0
23:12:36 WebOb==1.8.7
23:12:36 WebTest==3.0.0
23:12:36 zipp==3.6.0
23:12:36 ++ mkdir -p /tmp/tmp.Yl1wrOOp9K/src/onap
23:12:36 ++ rm -rf /tmp/tmp.Yl1wrOOp9K/src/onap/testsuite
23:12:36 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:41 ++ echo 'Installing python confluent-kafka library'
23:12:41 Installing python confluent-kafka library
23:12:41 ++ python3 -m pip install -qq confluent-kafka
23:12:43 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:43 Uninstall docker-py and reinstall docker.
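
prepare-robot-env.sh builds a disposable virtualenv under mktemp -d, pins an old pip/setuptools pair for the node's Python 3.6, and layers the Robot Framework libraries on top, exactly as the xtrace above records. A condensed sketch of the same sequence (pylibs.txt path is relative to the docker repo checkout; the trailing docker-py swap is the step announced just above):

    # Disposable Robot venv, as traced by prepare-robot-env.sh above.
    ROBOT_VENV=$(mktemp -d)
    python3 -m venv --clear "$ROBOT_VENV"
    source "$ROBOT_VENV/bin/activate"
    python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
    python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
    python3 -m pip install -qq --upgrade \
        --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
        'robotframework-onap==0.6.0.*' --pre
    python3 -m pip install -qq confluent-kafka
    python3 -m pip uninstall -y -qq docker && python3 -m pip install -U -qq docker
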
23:12:43 ++ python3 -m pip uninstall -y -qq docker
23:12:43 ++ python3 -m pip install -U -qq docker
23:12:44 ++ python3 -m pip -qq freeze
23:12:45 bcrypt==4.0.1
23:12:45 beautifulsoup4==4.12.3
23:12:45 bitarray==2.9.2
23:12:45 certifi==2024.2.2
23:12:45 cffi==1.15.1
23:12:45 charset-normalizer==2.0.12
23:12:45 confluent-kafka==2.3.0
23:12:45 cryptography==40.0.2
23:12:45 decorator==5.1.1
23:12:45 deepdiff==5.7.0
23:12:45 dnspython==2.2.1
23:12:45 docker==5.0.3
23:12:45 elasticsearch==7.17.9
23:12:45 elasticsearch-dsl==7.4.1
23:12:45 enum34==1.1.10
23:12:45 future==1.0.0
23:12:45 idna==3.6
23:12:45 importlib-resources==5.4.0
23:12:45 ipaddr==2.2.0
23:12:45 isodate==0.6.1
23:12:45 Jinja2==3.0.3
23:12:45 jmespath==0.10.0
23:12:45 jsonpatch==1.32
23:12:45 jsonpath-rw==1.4.0
23:12:45 jsonpointer==2.3
23:12:45 kafka-python==2.0.2
23:12:45 lxml==5.1.0
23:12:45 MarkupSafe==2.0.1
23:12:45 more-itertools==5.0.0
23:12:45 netaddr==0.8.0
23:12:45 netifaces==0.11.0
23:12:45 odltools==0.1.28
23:12:45 ordered-set==4.0.2
23:12:45 paramiko==3.4.0
23:12:45 pbr==6.0.0
23:12:45 pkg_resources==0.0.0
23:12:45 ply==3.11
23:12:45 protobuf==3.19.6
23:12:45 pyang==2.6.0
23:12:45 pyangbind==0.8.1
23:12:45 pycparser==2.21
23:12:45 pyhocon==0.3.60
23:12:45 PyNaCl==1.5.0
23:12:45 pyparsing==3.1.2
23:12:45 python-dateutil==2.9.0.post0
23:12:45 PyYAML==6.0.1
23:12:45 regex==2023.8.8
23:12:45 requests==2.27.1
23:12:45 robotframework==6.1.1
23:12:45 robotframework-httplibrary==0.4.2
23:12:45 robotframework-onap==0.6.0.dev105
23:12:45 robotframework-pythonlibcore==3.0.0
23:12:45 robotframework-requests==0.9.4
23:12:45 robotframework-selenium2library==3.0.0
23:12:45 robotframework-seleniumlibrary==5.1.3
23:12:45 robotframework-sshlibrary==3.8.0
23:12:45 robotlibcore-temp==1.0.2
23:12:45 scapy==2.5.0
23:12:45 scp==0.14.5
23:12:45 selenium==3.141.0
23:12:45 six==1.16.0
23:12:45 soupsieve==2.3.2.post1
23:12:45 urllib3==1.26.18
23:12:45 waitress==2.0.0
23:12:45 WebOb==1.8.7
23:12:45 websocket-client==1.3.1
23:12:45 WebTest==3.0.0
23:12:45 zipp==3.6.0
23:12:45 ++ uname
23:12:45 ++ grep -q Linux
23:12:45 ++ sudo apt-get -y -qq install libxml2-utils
23:12:45 + load_set
23:12:45 + _setopts=ehuxB
23:12:45 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:45 ++ tr : ' '
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o braceexpand
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o hashall
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o interactive-comments
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o nounset
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o xtrace
23:12:45 ++ sed 's/./& /g'
23:12:45 ++ echo ehuxB
23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:45 + set +e
23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:45 + set +h
23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:45 + set +u
23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:45 + set +x
23:12:45 + source_safely /tmp/tmp.Yl1wrOOp9K/bin/activate
23:12:45 + '[' -z /tmp/tmp.Yl1wrOOp9K/bin/activate ']'
23:12:45 + relax_set
23:12:45 + set +e
23:12:45 + set +o pipefail
23:12:45 + . /tmp/tmp.Yl1wrOOp9K/bin/activate
23:12:45 ++ deactivate nondestructive
23:12:45 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:45 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:45 ++ export PATH
23:12:45 ++ unset _OLD_VIRTUAL_PATH
23:12:45 ++ '[' -n '' ']'
23:12:45 ++ '[' -n /bin/bash -o -n '' ']'
23:12:45 ++ hash -r
23:12:45 ++ '[' -n '' ']'
23:12:45 ++ unset VIRTUAL_ENV
23:12:45 ++ '[' '!' nondestructive = nondestructive ']'
23:12:45 ++ VIRTUAL_ENV=/tmp/tmp.Yl1wrOOp9K
23:12:45 ++ export VIRTUAL_ENV
23:12:45 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:45 ++ PATH=/tmp/tmp.Yl1wrOOp9K/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:45 ++ export PATH
23:12:45 ++ '[' -n '' ']'
23:12:45 ++ '[' -z '' ']'
23:12:45 ++ _OLD_VIRTUAL_PS1='(tmp.Yl1wrOOp9K) '
23:12:45 ++ '[' 'x(tmp.Yl1wrOOp9K) ' '!=' x ']'
23:12:45 ++ PS1='(tmp.Yl1wrOOp9K) (tmp.Yl1wrOOp9K) '
23:12:45 ++ export PS1
23:12:45 ++ '[' -n /bin/bash -o -n '' ']'
23:12:45 ++ hash -r
23:12:45 + load_set
23:12:45 + _setopts=hxB
23:12:45 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:45 ++ tr : ' '
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o braceexpand
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o hashall
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o interactive-comments
23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:45 + set +o xtrace
23:12:45 ++ sed 's/./& /g'
23:12:45 ++ echo hxB
23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:45 + set +h
23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:45 + set +x
23:12:45 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:45 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:45 + export TEST_OPTIONS=
23:12:45 + TEST_OPTIONS=
23:12:45 ++ mktemp -d
23:12:45 + WORKDIR=/tmp/tmp.Xn1lruRwEW
23:12:45 + cd /tmp/tmp.Xn1lruRwEW
23:12:45 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:45 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:45 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:45 Configure a credential helper to remove this warning. See
23:12:45 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:45 
23:12:45 Login Succeeded
23:12:45 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:45 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:45 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:45 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:45 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:45 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:45 + relax_set
23:12:45 + set +e
23:12:45 + set +o pipefail
23:12:45 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:45 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:45 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:45 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:45 +++ GERRIT_BRANCH=master
23:12:45 +++ echo GERRIT_BRANCH=master
23:12:45 GERRIT_BRANCH=master
23:12:45 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:45 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:45 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:45 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:46 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:46 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:46 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:46 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:46 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:46 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:46 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:46 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:46 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:46 +++ grafana=false
23:12:46 +++ gui=false
23:12:46 +++ [[ 2 -gt 0 ]]
23:12:46 +++ key=apex-pdp
23:12:46 +++ case $key in
23:12:46 +++ echo apex-pdp
23:12:46 apex-pdp
23:12:46 +++ component=apex-pdp
23:12:46 +++ shift
23:12:46 +++ [[ 1 -gt 0 ]]
23:12:46 +++ key=--grafana
23:12:46 +++ case $key in
23:12:46 +++ grafana=true
23:12:46 +++ shift
23:12:46 +++ [[ 0 -gt 0 ]]
23:12:46 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:46 +++ echo 'Configuring docker compose...'
23:12:46 Configuring docker compose...
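
The docker login step above passes the password with -p on the command line, which is what triggers the two warnings in the trace. The warning's own recommendation is to pipe the secret instead; a one-line equivalent (credentials here are the anonymous read-only defaults shown in the trace):

    # Same login without putting the password in argv / shell history.
    echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001
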
23:12:46 +++ source export-ports.sh
23:12:46 +++ source get-versions.sh
23:12:48 +++ '[' -z pap ']'
23:12:48 +++ '[' -n apex-pdp ']'
23:12:48 +++ '[' apex-pdp == logs ']'
23:12:48 +++ '[' true = true ']'
23:12:48 +++ echo 'Starting apex-pdp application with Grafana'
23:12:48 Starting apex-pdp application with Grafana
23:12:48 +++ docker-compose up -d apex-pdp grafana
23:12:49 Creating network "compose_default" with the default driver
23:12:49 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:49 latest: Pulling from prom/prometheus
23:12:52 Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e
23:12:52 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:12:52 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:12:52 latest: Pulling from grafana/grafana
23:12:58 Digest: sha256:f9811e4e687ffecf1a43adb9b64096c50bc0d7a782f8608530f478b6542de7d5
23:12:58 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
23:12:58 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
23:12:58 10.10.2: Pulling from mariadb
23:13:02 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
23:13:02 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
23:13:02 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
23:13:02 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
23:13:06 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13
23:13:06 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
23:13:06 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
23:13:06 latest: Pulling from confluentinc/cp-zookeeper
23:13:17 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
23:13:17 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
23:13:17 Pulling kafka (confluentinc/cp-kafka:latest)...
23:13:17 latest: Pulling from confluentinc/cp-kafka
23:13:20 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
23:13:20 Status: Downloaded newer image for confluentinc/cp-kafka:latest
23:13:20 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
23:13:20 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
23:13:24 Digest: sha256:37b4f26d0170f90ca974aea8100c4fea8bf2a2b3b5cdb1e4e7c97492d3a4ad6a
23:13:24 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
23:13:24 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
23:13:24 3.1.2-SNAPSHOT: Pulling from onap/policy-api
23:13:28 Digest: sha256:fdc9aa26830be0af882248f5f576f0e9466b8e17ff432e8618d01432efa85803
23:13:28 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
23:13:28 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
23:13:28 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
23:13:30 Digest: sha256:5e7bdea16830f0dd3e16df519f0efbee38922192c2a79297bcac6699fa44e067
23:13:30 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
23:13:30 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
23:13:30 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
23:13:36 Digest: sha256:3f9880e060c3465862043c69561fa1d43ab448175d1adf3efd53d751d3b9947d
23:13:36 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
23:13:37 Creating prometheus ...
23:13:37 Creating compose_zookeeper_1 ...
23:13:37 Creating simulator ...
23:13:37 Creating mariadb ...
23:13:44 Creating mariadb ... done
23:13:44 Creating policy-db-migrator ...
23:13:45 Creating simulator ... done
23:13:46 Creating compose_zookeeper_1 ... done
23:13:46 Creating kafka ...
23:13:47 Creating kafka ... done
23:13:48 Creating prometheus ... done
23:13:48 Creating grafana ...
23:13:49 Creating grafana ... done
23:13:50 Creating policy-db-migrator ... done
23:13:50 Creating policy-api ...
23:13:51 Creating policy-api ... done
23:13:51 Creating policy-pap ...
23:13:52 Creating policy-pap ... done
23:13:52 Creating policy-apex-pdp ...
23:13:53 Creating policy-apex-pdp ... done
23:13:53 +++ echo 'Prometheus server: http://localhost:30259'
23:13:53 Prometheus server: http://localhost:30259
23:13:53 +++ echo 'Grafana server: http://localhost:30269'
23:13:53 Grafana server: http://localhost:30269
23:13:53 +++ cd /w/workspace/policy-pap-master-project-csit-pap
23:13:53 ++ sleep 10
23:14:03 ++ unset http_proxy https_proxy
23:14:03 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
23:14:03 Waiting for REST to come up on localhost port 30003...
23:14:03 NAMES                 STATUS
23:14:03 policy-apex-pdp       Up 10 seconds
23:14:03 policy-pap            Up 11 seconds
23:14:03 policy-api            Up 11 seconds
23:14:03 grafana               Up 14 seconds
23:14:03 kafka                 Up 15 seconds
23:14:03 mariadb               Up 18 seconds
23:14:03 simulator             Up 17 seconds
23:14:03 compose_zookeeper_1   Up 16 seconds
23:14:03 prometheus            Up 14 seconds
23:14:08 NAMES                 STATUS
23:14:08 policy-apex-pdp       Up 15 seconds
23:14:08 policy-pap            Up 16 seconds
23:14:08 policy-api            Up 17 seconds
23:14:08 grafana               Up 19 seconds
23:14:08 kafka                 Up 20 seconds
23:14:08 mariadb               Up 23 seconds
23:14:08 simulator             Up 22 seconds
23:14:08 compose_zookeeper_1   Up 21 seconds
23:14:08 prometheus            Up 19 seconds
23:14:13 NAMES                 STATUS
23:14:13 policy-apex-pdp       Up 20 seconds
23:14:13 policy-pap            Up 21 seconds
23:14:13 policy-api            Up 22 seconds
23:14:13 grafana               Up 24 seconds
23:14:13 kafka                 Up 25 seconds
23:14:13 mariadb               Up 28 seconds
23:14:13 simulator             Up 28 seconds
23:14:13 compose_zookeeper_1   Up 26 seconds
23:14:13 prometheus            Up 25 seconds
23:14:18 NAMES                 STATUS
23:14:18 policy-apex-pdp       Up 25 seconds
23:14:18 policy-pap            Up 26 seconds
23:14:18 policy-api            Up 27 seconds
23:14:18 grafana               Up 29 seconds
23:14:18 kafka                 Up 30 seconds
23:14:18 mariadb               Up 34 seconds
23:14:18 simulator             Up 33 seconds
23:14:18 compose_zookeeper_1   Up 31 seconds
23:14:18 prometheus            Up 30 seconds
23:14:23 NAMES                 STATUS
23:14:23 policy-apex-pdp       Up 30 seconds
23:14:23 policy-pap            Up 31 seconds
23:14:23 policy-api            Up 32 seconds
23:14:23 grafana               Up 34 seconds
23:14:23 kafka                 Up 35 seconds
23:14:23 mariadb               Up 39 seconds
23:14:23 simulator             Up 38 seconds
23:14:23 compose_zookeeper_1   Up 36 seconds
23:14:23 prometheus            Up 35 seconds
23:14:23 ++ export 'SUITES=pap-test.robot
23:14:23 pap-slas.robot'
23:14:23 ++ SUITES='pap-test.robot
23:14:23 pap-slas.robot'
23:14:23 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:23 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:23 + load_set
23:14:23 + _setopts=hxB
23:14:23 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:14:23 ++ tr : ' '
23:14:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:23 + set +o braceexpand
23:14:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:23 + set +o hashall
23:14:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:23 + set +o interactive-comments
23:14:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:23 + set +o xtrace
23:14:23 ++ echo hxB
23:14:23 ++ sed 's/./& /g'
23:14:23 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:23 + set +h
23:14:23 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:23 + set +x
23:14:23 + docker_stats
23:14:23 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:14:23 ++ uname -s
23:14:23 + '[' Linux == Darwin ']'
23:14:23 + sh -c 'top -bn1 | head -3'
23:14:23 top - 23:14:23 up 4 min,  0 users,  load average: 2.55, 1.08, 0.43
23:14:23 Tasks: 208 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
23:14:23 %Cpu(s): 14.8 us,  3.0 sy,  0.0 ni, 79.0 id,  3.1 wa,  0.0 hi,  0.1 si,  0.1 st
23:14:23 + echo
23:14:23 
23:14:23 + sh -c 'free -h'
23:14:23               total        used        free      shared  buff/cache   available
23:14:23 Mem:            31G        2.7G         22G        1.3M        6.4G         28G
23:14:23 Swap:          1.0G          0B        1.0G
23:14:23 + echo
23:14:23 
23:14:23 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:14:23 NAMES                 STATUS
23:14:23 policy-apex-pdp       Up 30 seconds
23:14:23 policy-pap            Up 31 seconds
23:14:23 policy-api            Up 32 seconds
23:14:23 grafana               Up 34 seconds
23:14:23 kafka                 Up 36 seconds
23:14:23 mariadb               Up 39 seconds
23:14:23 simulator             Up 38 seconds
23:14:23 compose_zookeeper_1   Up 37 seconds
23:14:23 prometheus            Up 35 seconds
23:14:23 + echo
23:14:23 
23:14:23 + docker stats --no-stream
23:14:26 CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
23:14:26 779c81be093f   policy-apex-pdp       1.49%   194.2MiB / 31.41GiB   0.60%   6.98kB / 6.77kB   0B / 0B           48
23:14:26 f0db664a443a   policy-pap            3.14%   563.2MiB / 31.41GiB   1.75%   28.3kB / 30.2kB   0B / 153MB        61
23:14:26 1c2bd153d208   policy-api            0.14%   497.5MiB / 31.41GiB   1.55%   999kB / 710kB     0B / 0B           54
23:14:26 f00e99419f43   grafana               0.29%   53.1MiB / 31.41GiB    0.17%   18.5kB / 3.38kB   0B / 24.9MB       15
23:14:26 4ceeac07ec8e   kafka                 0.61%   376.2MiB / 31.41GiB   1.17%   70.6kB / 72.7kB   0B / 475kB        83
23:14:26 6036c5abe3ed   mariadb               0.02%   102.1MiB / 31.41GiB   0.32%   995kB / 1.19MB    10.9MB / 71.6MB   37
23:14:26 a0033526e784   simulator             0.09%   123.3MiB / 31.41GiB   0.38%   1.36kB / 0B       225kB / 0B        76
23:14:26 9d802bffc7ba   compose_zookeeper_1   0.17%   98.43MiB / 31.41GiB   0.31%   56.1kB / 50.4kB   0B / 356kB        61
23:14:26 9b867a9bea16   prometheus            0.00%   18.23MiB / 31.41GiB   0.06%   1.37kB / 158B     0B / 0B           12
23:14:26 + echo
23:14:26 
23:14:26 + cd /tmp/tmp.Xn1lruRwEW
23:14:26 + echo 'Reading the testplan:'
23:14:26 Reading the testplan:
23:14:26 + echo 'pap-test.robot
23:14:26 pap-slas.robot'
23:14:26 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:14:26 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:14:26 + cat testplan.txt
23:14:26 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:26 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:26 ++ xargs
23:14:26 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:26 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:14:26 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:26 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:26 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:26 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:26 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:26 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
23:14:26 + relax_set
23:14:26 + set +e
23:14:26 + set +o pipefail
23:14:26 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:26 ==============================================================================
23:14:26 pap
23:14:26 ==============================================================================
23:14:26 pap.Pap-Test
23:14:26 ==============================================================================
23:14:27 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:27 ------------------------------------------------------------------------------
23:14:28 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:14:28 ------------------------------------------------------------------------------
23:14:28 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:14:28 ------------------------------------------------------------------------------
23:14:28 Healthcheck :: Verify policy pap health check | PASS |
23:14:28 ------------------------------------------------------------------------------
23:14:49 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:14:49 ------------------------------------------------------------------------------
23:14:49 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:14:49 ------------------------------------------------------------------------------
23:14:50 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:14:50 ------------------------------------------------------------------------------
23:14:50 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:14:50 ------------------------------------------------------------------------------
23:14:50 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:14:50 ------------------------------------------------------------------------------
23:14:50 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:14:50 ------------------------------------------------------------------------------
23:14:51 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:14:51 ------------------------------------------------------------------------------
23:14:51 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:14:51 ------------------------------------------------------------------------------
23:14:51 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:14:51 ------------------------------------------------------------------------------
23:14:51 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:14:51 ------------------------------------------------------------------------------
23:14:51 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:14:51 ------------------------------------------------------------------------------
23:14:52 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:14:52 ------------------------------------------------------------------------------
23:14:52 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:14:52 ------------------------------------------------------------------------------
23:15:12 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
23:15:12 pdpTypeC != pdpTypeA
23:15:12 ------------------------------------------------------------------------------
23:15:12 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:12 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:13 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 pap.Pap-Test | FAIL |
23:15:13 22 tests, 21 passed, 1 failed
23:15:13 ==============================================================================
23:15:13 pap.Pap-Slas
23:15:13 ==============================================================================
23:16:13 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:13 ------------------------------------------------------------------------------
23:16:13 pap.Pap-Slas | PASS |
23:16:13 8 tests, 8 passed, 0 failed
23:16:13 ==============================================================================
23:16:13 pap | FAIL |
23:16:13 30 tests, 29 passed, 1 failed
23:16:13 ==============================================================================
23:16:13 Output: /tmp/tmp.Xn1lruRwEW/output.xml
23:16:13 Log: /tmp/tmp.Xn1lruRwEW/log.html
23:16:13 Report: /tmp/tmp.Xn1lruRwEW/report.html
23:16:13 + RESULT=1
23:16:13 + load_set
23:16:13 + _setopts=hxB
23:16:13 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:13 ++ tr : ' '
23:16:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:13 + set +o braceexpand
23:16:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:13 + set +o hashall
23:16:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:13 + set +o interactive-comments
23:16:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:13 + set +o xtrace
23:16:13 ++ echo hxB
23:16:13 ++ sed 's/./& /g'
23:16:13 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:13 + set +h
23:16:13 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:13 + set +x
23:16:13 + echo 'RESULT: 1'
23:16:13 RESULT: 1
23:16:13 + exit 1
23:16:13 + on_exit
23:16:13 + rc=1
23:16:13 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:13 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:13 NAMES                 STATUS
23:16:13 policy-apex-pdp       Up 2 minutes
23:16:13 policy-pap            Up 2 minutes
23:16:13 policy-api            Up 2 minutes
23:16:13 grafana               Up 2 minutes
23:16:13 kafka                 Up 2 minutes
23:16:13 mariadb               Up 2 minutes
23:16:13 simulator             Up 2 minutes
23:16:13 compose_zookeeper_1   Up 2 minutes
23:16:13 prometheus            Up 2 minutes
23:16:13 + docker_stats
23:16:13 ++ uname -s
23:16:13 + '[' Linux == Darwin ']'
23:16:13 + sh -c 'top -bn1 | head -3'
23:16:13 top - 23:16:13 up 5 min,  0 users,  load average: 0.68, 0.91, 0.44
23:16:13 Tasks: 197 total,   1 running, 129 sleeping,   0 stopped,   0 zombie
23:16:13 %Cpu(s): 11.6 us,  2.2 sy,  0.0 ni, 83.6 id,  2.4 wa,  0.0 hi,  0.1 si,  0.1 st
23:16:13 + echo
23:16:13 
23:16:13 + sh -c 'free -h'
23:16:13               total        used        free      shared  buff/cache   available
23:16:13 Mem:            31G        2.8G         22G        1.3M        6.4G         28G
23:16:13 Swap:          1.0G          0B        1.0G
23:16:13 + echo
23:16:13 
23:16:13 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:13 NAMES                 STATUS
23:16:13 policy-apex-pdp       Up 2 minutes
23:16:13 policy-pap            Up 2 minutes
23:16:13 policy-api            Up 2 minutes
23:16:13 grafana               Up 2 minutes
23:16:13 kafka                 Up 2 minutes
23:16:13 mariadb               Up 2 minutes
23:16:13 simulator             Up 2 minutes
23:16:13 compose_zookeeper_1   Up 2 minutes
23:16:13 prometheus            Up 2 minutes
23:16:13 + echo
23:16:13 
23:16:13 + docker stats --no-stream
23:16:16 CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
23:16:16 779c81be093f   policy-apex-pdp       1.30%   186.1MiB / 31.41GiB   0.58%   56.3kB / 91.1kB   0B / 0B           52
23:16:16 f0db664a443a   policy-pap            0.55%   537.8MiB / 31.41GiB   1.67%   2.33MB / 807kB    0B / 153MB        65
23:16:16 1c2bd153d208   policy-api            0.11%   561.6MiB / 31.41GiB   1.75%   2.49MB / 1.26MB   0B / 0B           57
23:16:16 f00e99419f43   grafana               0.03%   60.84MiB / 31.41GiB   0.19%   19.3kB / 4.33kB   0B / 24.9MB       15
23:16:16 4ceeac07ec8e   kafka                 9.94%   390.2MiB / 31.41GiB   1.21%   241kB / 215kB     0B / 573kB        85
23:16:16 6036c5abe3ed   mariadb               0.01%   103.4MiB / 31.41GiB   0.32%   1.95MB / 4.77MB   10.9MB / 71.9MB   28
23:16:16 a0033526e784   simulator             0.22%   123.5MiB / 31.41GiB   0.38%   1.67kB / 0B       225kB / 0B        78
23:16:16 9d802bffc7ba   compose_zookeeper_1   0.07%   99.74MiB / 31.41GiB   0.31%   59kB / 52kB       0B / 356kB        61
23:16:16 9b867a9bea16   prometheus            0.00%   24.45MiB / 31.41GiB   0.08%   181kB / 11kB      0B / 0B           12
23:16:16 + echo
23:16:16 
23:16:16 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:16 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:16 + relax_set
23:16:16 + set +e
23:16:16 + set +o pipefail
23:16:16 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:16 ++ echo 'Shut down started!'
23:16:16 Shut down started!
23:16:16 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:16 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:16 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:16 ++ source export-ports.sh
23:16:16 ++ source get-versions.sh
23:16:18 ++ echo 'Collecting logs from docker compose containers...'
23:16:18 Collecting logs from docker compose containers...
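
stop-compose.sh snapshots every service's output before tearing the stack down; the docker_compose.log file cat'ed below is the artifact of that step. A minimal sketch of the collection step under that assumption (Compose v1 CLI, as invoked throughout this job; the redirection is inferred from the cat that follows):

    # Capture all container logs from the compose project into one archive file.
    cd /w/workspace/policy-pap-master-project-csit-pap/compose
    docker-compose logs > docker_compose.log
    cat docker_compose.log
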
23:16:18 ++ docker-compose logs
23:16:20 ++ cat docker_compose.log
23:16:20 Attaching to policy-apex-pdp, policy-pap, policy-api, grafana, kafka, policy-db-migrator, mariadb, simulator, compose_zookeeper_1, prometheus
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356409234Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2024-03-15T23:13:49Z
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356733643Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356751093Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356755673Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356760734Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356763484Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356766444Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356769744Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356773664Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356779454Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356782084Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356789334Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.356792704Z level=info msg=Target target=[all]
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.35698344Z level=info msg="Path Home" path=/usr/share/grafana
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.35699686Z level=info msg="Path Data" path=/var/lib/grafana
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.3570047Z level=info msg="Path Logs" path=/var/log/grafana
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.357009261Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.357012991Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
23:16:20 grafana | logger=settings t=2024-03-15T23:13:49.357015941Z level=info msg="App mode production"
23:16:20 grafana | logger=sqlstore t=2024-03-15T23:13:49.35734746Z level=info msg="Connecting to DB" dbtype=sqlite3
23:16:20 grafana | logger=sqlstore t=2024-03-15T23:13:49.357375651Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.358184834Z level=info msg="Starting DB migrations"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.359280845Z level=info msg="Executing migration" id="create migration_log table"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.360222071Z level=info msg="Migration successfully executed" id="create migration_log table" duration=940.956µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.364634466Z level=info msg="Executing migration" id="create user table"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.365326446Z level=info msg="Migration successfully executed" id="create user table" duration=691.79µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.371245203Z level=info msg="Executing migration" id="add unique index user.login"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.37255481Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.309027ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.377452698Z level=info msg="Executing migration" id="add unique index user.email"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.378644142Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.190294ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.38247828Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.383602282Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.124302ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.388670055Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.389415076Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=744.941µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.392564195Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.395797226Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.233231ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.398794561Z level=info msg="Executing migration" id="create user table v2"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.399626745Z level=info msg="Migration successfully executed" id="create user table v2" duration=831.694µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.403998258Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.404744839Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=740.241µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.407938479Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.40867821Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=742.311µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.411921462Z level=info msg="Executing migration" id="copy data_source v1 to v2"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.412334244Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=412.342µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.417817509Z level=info msg="Executing migration" id="Drop old table user_v1"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.418705634Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=887.596µs
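
Each grafana migrator line above carries a duration=... field in µs or ms, which makes the captured docker_compose.log easy to mine for slow schema migrations. A rough sketch of such a query (field names taken from the lines above; unit normalization and the awk field handling are assumptions, not part of the job's tooling):

    # Five slowest grafana DB migrations, duration normalized to microseconds.
    grep 'Migration successfully executed' docker_compose.log |
    awk -F'duration=' '{
        n = $2 + 0                        # numeric prefix of e.g. "940.956µs"
        if ($2 ~ /^[0-9.]+ms/) n *= 1000  # milliseconds -> microseconds
        sub(/.*id="/, "", $1); sub(/".*/, "", $1)
        printf "%12.1f  %s\n", n, $1      # duration followed by the migration id
    }' | sort -n | tail -5
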
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.423659394Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.425465465Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.809021ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.431845695Z level=info msg="Executing migration" id="Update user table charset"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.431873096Z level=info msg="Migration successfully executed" id="Update user table charset" duration=28.401µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.436663361Z level=info msg="Executing migration" id="Add last_seen_at column to user"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.437933327Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.266926ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.446586201Z level=info msg="Executing migration" id="Add missing user data"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.446921401Z level=info msg="Migration successfully executed" id="Add missing user data" duration=334.94µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.450892433Z level=info msg="Executing migration" id="Add is_disabled column to user"
23:16:20 zookeeper_1 | ===> User
23:16:20 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:20 zookeeper_1 | ===> Configuring ...
23:16:20 zookeeper_1 | ===> Running preflight checks ...
23:16:20 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
23:16:20 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
23:16:20 zookeeper_1 | ===> Launching ...
23:16:20 zookeeper_1 | ===> Launching zookeeper ...
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,958] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,965] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,966] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,966] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,966] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,967] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,967] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,967] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,967] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,969] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,969] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,970] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,970] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,970] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,970] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,970] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,981] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,983] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,984] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,986] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
23:16:20 policy-db-migrator | Waiting for mariadb port 3306...
23:16:20 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded!
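
The migrator blocks until MariaDB accepts TCP connections; the "Connection to mariadb ... succeeded!" line above is netcat's verbose success message. A minimal equivalent of such a wait loop (the container's actual entrypoint is not shown in this log, so this is only an illustration):

    # Poll until the mariadb service accepts connections on 3306.
    until nc -z -v mariadb 3306; do
        echo 'Waiting for mariadb port 3306...'
        sleep 2
    done
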
23:16:20 policy-db-migrator | 321 blocks 23:16:20 policy-db-migrator | Preparing upgrade release version: 0800 23:16:20 policy-db-migrator | Preparing upgrade release version: 0900 23:16:20 policy-db-migrator | Preparing upgrade release version: 1000 23:16:20 policy-db-migrator | Preparing upgrade release version: 1100 23:16:20 policy-db-migrator | Preparing upgrade release version: 1200 23:16:20 policy-db-migrator | Preparing upgrade release version: 1300 23:16:20 policy-db-migrator | Done 23:16:20 policy-db-migrator | name version 23:16:20 policy-db-migrator | policyadmin 0 23:16:20 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:20 policy-db-migrator | upgrade: 0 -> 1300 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO / /__ | (_) | 
| (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,995] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,996] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:host.name=9d802bffc7ba (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.452805087Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.912364ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.45786322Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.458694473Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=831.103µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.464022134Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.465264549Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.241745ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.469461668Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.477451083Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.989075ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.481605331Z level=info msg="Executing migration" id="Add uid column to user" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.482912948Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.305686ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.486599442Z level=info msg="Executing migration" id="Update uid column values for users" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.486910621Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=310.578µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.492462407Z level=info msg="Executing migration" id="Add unique index user_uid" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.493287991Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=825.094µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.497908761Z level=info msg="Executing migration" 
id="create temp user table v1-7" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.499311971Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.40293ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.504462756Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.505180657Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=714.831µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.511377712Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.512556315Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.178023ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.520053917Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.521250751Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.196914ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.527159278Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.528529126Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.369298ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.534119874Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.534157555Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=38.971µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.54034935Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.541481432Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.125512ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.547293626Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.548060638Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=767.182µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.551605758Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.55238435Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=778.652µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.559010988Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.560138469Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.127612ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.564887244Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.567997081Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.109508ms 23:16:20 grafana | logger=migrator 
t=2024-03-15T23:13:49.572409416Z level=info msg="Executing migration" id="create temp_user v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.573501597Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.172103ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.581483152Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.583143179Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.665657ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.589660403Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.590529038Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=869.575µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.598343109Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.599662916Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.319107ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.603386751Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties 
(name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 
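
Every upgrade script in this run creates its tables with CREATE TABLE IF NOT EXISTS, which is what makes the logged 0 -> 1300 upgrade re-runnable: applying a release a second time leaves the schema unchanged. A minimal JDBC sketch of one such step, using the 0100-jpapdpgroup_properties.sql DDL verbatim from the log; the JDBC URL and the policy_user credentials are illustrative assumptions, a MariaDB driver would need to be on the classpath, and the real migrator is a script that feeds these .sql files to the database rather than Java code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class ApplyUpgradeScript {
        public static void main(String[] args) throws Exception {
            // Assumption: the endpoint the log shows (mariadb, port 3306) and placeholder credentials.
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            try (Connection c = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement st = c.createStatement()) {
                // IF NOT EXISTS makes the step idempotent: re-applying release 0100 is a no-op.
                st.execute("CREATE TABLE IF NOT EXISTS jpapdpgroup_properties ("
                        + "name VARCHAR(120) DEFAULT NULL, "
                        + "version VARCHAR(20) DEFAULT NULL, "
                        + "PROPERTIES VARCHAR(255) DEFAULT NULL, "
                        + "PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)");
            }
        }
    }

The "name version / policyadmin 0" table printed before the upgrade is the migrator's own version bookkeeping: it records the current schema level so the run can compute "upgrade available: 0 -> 1300".
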
23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-
auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/j
ava/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 
23:13:49,999] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:49,999] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,000] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,001] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,001] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,002] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,002] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:20 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 
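
The minSessionTimeout/maxSessionTimeout values logged above are not set anywhere in this configuration; they are ZooKeeper's defaults when the two options are left unset, namely 2 x tickTime and 20 x tickTime. With tickTime 2000 ms (see the "Created server with tickTime 2000 ms ..." entry further down) that yields exactly the 4000 ms and 40000 ms in the log:

    public final class SessionTimeoutDefaults {
        public static void main(String[] args) {
            int tickTimeMs = 2000;                     // from the broker's zookeeper.properties
            int minSessionTimeoutMs = 2 * tickTimeMs;  // ZooKeeper default when unset -> 4000 ms
            int maxSessionTimeoutMs = 20 * tickTimeMs; // ZooKeeper default when unset -> 40000 ms
            System.out.println(minSessionTimeoutMs + " / " + maxSessionTimeoutMs);
        }
    }
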
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.604641707Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.255156ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.609443832Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.609958787Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=514.445µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.613236089Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.613893308Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=656.539µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.61715722Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.61785861Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=701.08µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.62174803Z level=info msg="Executing migration" id="create star table" 23:16:20 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:20 policy-apex-pdp | mariadb (172.17.0.5:3306) open 23:16:20 policy-apex-pdp | Waiting for kafka port 9092... 23:16:20 policy-apex-pdp | kafka (172.17.0.7:9092) open 23:16:20 policy-apex-pdp | Waiting for pap port 6969... 
23:16:20 policy-apex-pdp | pap (172.17.0.10:6969) open 23:16:20 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:20 policy-apex-pdp | [2024-03-15T23:14:22.942+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.165+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:20 policy-apex-pdp | allow.auto.create.topics = true 23:16:20 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:20 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:20 policy-apex-pdp | auto.offset.reset = latest 23:16:20 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:20 policy-apex-pdp | check.crcs = true 23:16:20 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:20 policy-apex-pdp | client.id = consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-1 23:16:20 policy-apex-pdp | client.rack = 23:16:20 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:20 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:20 policy-apex-pdp | enable.auto.commit = true 23:16:20 policy-apex-pdp | exclude.internal.topics = true 23:16:20 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:20 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:20 policy-apex-pdp | fetch.min.bytes = 1 23:16:20 policy-apex-pdp | group.id = 2f21b508-fe17-4ab8-9275-1762b58c9ac3 23:16:20 policy-apex-pdp | group.instance.id = null 23:16:20 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:20 policy-apex-pdp | interceptor.classes = [] 23:16:20 policy-apex-pdp | internal.leave.group.on.close = true 23:16:20 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:20 policy-apex-pdp | isolation.level = read_uncommitted 23:16:20 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:20 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:20 policy-apex-pdp | max.poll.records = 500 23:16:20 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:20 policy-apex-pdp | metric.reporters = [] 23:16:20 policy-apex-pdp | metrics.num.samples = 2 23:16:20 policy-apex-pdp | metrics.recording.level = INFO 23:16:20 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:20 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:20 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:20 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:20 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:20 policy-apex-pdp | request.timeout.ms = 30000 23:16:20 policy-apex-pdp | retry.backoff.ms = 100 23:16:20 
policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:20 policy-apex-pdp | sasl.jaas.config = null 23:16:20 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:20 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:20 policy-apex-pdp | sasl.login.class = null 23:16:20 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:20 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:20 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:20 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.622961164Z level=info msg="Migration successfully executed" id="create star table" duration=1.212914ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.627717119Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.628555242Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=841.084µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.631923507Z level=info msg="Executing migration" id="create org table v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.632798642Z level=info msg="Migration successfully executed" id="create org table v1" duration=874.755µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.636540818Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.637792803Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.251595ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.642431214Z level=info msg="Executing migration" id="create org_user table v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.643190106Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=758.262µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.654187436Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.655504404Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.316968ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.659341532Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.660724831Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.383099ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.66492221Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.666493524Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.571124ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.671158966Z level=info msg="Executing migration" id="Update org table charset" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.671277979Z level=info msg="Migration successfully executed" id="Update org table charset" duration=115.373µs 23:16:20 
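
The ConsumerConfig dump that policy-apex-pdp prints above can be reproduced by a consumer built with the same key settings. A minimal sketch mirroring the values shown in the log (bootstrap.servers, group.id, auto.offset.reset, auto-commit, key.deserializer); value.deserializer does not appear in this excerpt, so StringDeserializer is assumed for it, and the subscribe/poll calls are omitted because the topic name is not in this part of the log:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public final class ApexPdpConsumerSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            // Values mirrored from the ConsumerConfig dump above.
            p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ConsumerConfig.GROUP_ID_CONFIG, "2f21b508-fe17-4ab8-9275-1762b58c9ac3");
            p.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            p.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            p.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");
            p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // Assumption: value.deserializer is not shown in this excerpt.
            p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
                // consumer.subscribe(...) and consumer.poll(...) would follow here.
            }
        }
    }
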
grafana | logger=migrator t=2024-03-15T23:13:49.674468679Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.674597143Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=129.104µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.677842345Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.678280057Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=438.112µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.681864628Z level=info msg="Executing migration" id="create dashboard table" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.683169915Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.305457ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.687651612Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.688547177Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=895.305µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.691942203Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.69288657Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=942.777µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.696668757Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.6978639Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.194673ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.701293837Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.702176912Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=888.855µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.706514245Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.707288797Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=774.692µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.710772485Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.7159074Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.134375ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.72791989Z level=info msg="Executing migration" id="create dashboard v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.728897317Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=980.358µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.73290287Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.733520348Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=617.428µs 23:16:20 
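
The grafana migrator entries interleaved through this log all follow one pattern: an "Executing migration" line, the schema change itself, then a "Migration successfully executed" line carrying the measured wall-clock duration. A small sketch of that execute-and-time loop; Grafana's real migrator is written in Go, Java is used here only for consistency with the rest of this stack, and the step body is a hypothetical placeholder:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public final class MigratorLogSketch {
        public static void main(String[] args) {
            // Assumption: ordered migrations as id -> step; real steps are SQL statements.
            Map<String, Runnable> migrations = new LinkedHashMap<>();
            migrations.put("create dashboard table", () -> { /* DDL would run here */ });
            for (Map.Entry<String, Runnable> m : migrations.entrySet()) {
                System.out.printf("level=info msg=\"Executing migration\" id=\"%s\"%n", m.getKey());
                Instant start = Instant.now();
                m.getValue().run();
                long micros = Duration.between(start, Instant.now()).toNanos() / 1000;
                System.out.printf(
                        "level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%dµs%n",
                        m.getKey(), micros);
            }
        }
    }
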
grafana | logger=migrator t=2024-03-15T23:13:49.736967755Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.737592903Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=624.768µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.740768733Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.741090302Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=321.15µs 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.745358542Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:20 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:20 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.746035081Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=677.289µs 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:20 mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:20 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.749378276Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:20 kafka | ===> User 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:20 mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:20 policy-api | Waiting for mariadb port 3306... 23:16:20 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.74951588Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=138.304µs 23:16:20 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,005] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,006] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:20 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:20 policy-api | mariadb (172.17.0.5:3306) open 23:16:20 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:20 policy-db-migrator | 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.752821043Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:20 kafka | ===> Configuring ... 23:16:20 policy-pap | Waiting for mariadb port 3306... 
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,006] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:20 mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Initializing database files 23:16:20 simulator | overriding logback.xml 23:16:20 policy-api | Waiting for policy-db-migrator port 6824... 23:16:20 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:20 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.754225533Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.40439ms 23:16:20 kafka | Running in Zookeeper mode... 23:16:20 policy-pap | mariadb (172.17.0.5:3306) open 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,006] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:20 mariadb | 2024-03-15 23:13:44 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:20 simulator | 2024-03-15 23:13:45,770 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:20 policy-api | policy-db-migrator (172.17.0.6:6824) open 23:16:20 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:20 policy-db-migrator | -------------- 23:16:20 prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:20 kafka | ===> Running preflight checks ... 23:16:20 policy-pap | Waiting for kafka port 9092... 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,006] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:20 mariadb | 2024-03-15 23:13:44 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:20 simulator | 2024-03-15 23:13:45,826 INFO org.onap.policy.models.simulators starting 23:16:20 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:20 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.758444002Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:20 policy-db-migrator | 23:16:20 prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" 23:16:20 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:20 policy-pap | kafka (172.17.0.7:9092) open 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,027] INFO Logging initialized @502ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:20 mariadb | 2024-03-15 23:13:44 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
23:16:20 simulator | 2024-03-15 23:13:45,826 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:20 policy-api | 23:16:20 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.759742179Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.298056ms 23:16:20 policy-db-migrator | 23:16:20 prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" 23:16:20 kafka | ===> Check if Zookeeper is healthy ... 23:16:20 policy-pap | Waiting for api port 6969... 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,115] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:20 mariadb | 23:16:20 simulator | 2024-03-15 23:13:46,007 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:20 policy-api | . ____ _ __ _ _ 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.762952349Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:20 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:20 prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:20 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:20 policy-pap | api (172.17.0.9:6969) open 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,115] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:20 mariadb | 23:16:20 simulator | 2024-03-15 23:13:46,008 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:20 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.764277077Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.321878ms 23:16:20 policy-db-migrator | -------------- 23:16:20 prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:20 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:20 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,134] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:20 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
23:16:20 simulator | 2024-03-15 23:13:46,114 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:20 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:20 prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.767881718Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:20 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:20 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:20 zookeeper_1 | [2024-03-15 23:13:50,168] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:20 mariadb | To do so, start the server, then issue the following command: 23:16:20 simulator | 2024-03-15 23:13:46,125 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:20 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 policy-db-migrator | -------------- 23:16:20 prometheus | ts=2024-03-15T23:13:48.336Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.768492046Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=609.728µs 23:16:20 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
23:16:20 policy-pap | 
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,168] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
23:16:20 mariadb | 
23:16:20 simulator | 2024-03-15 23:13:46,128 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:20 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
23:16:20 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:20 policy-db-migrator | 
23:16:20 prometheus | ts=2024-03-15T23:13:48.337Z caller=main.go:1118 level=info msg="Starting TSDB ..."
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.772662494Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
23:16:20 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
23:16:20 policy-pap | . ____ _ __ _ _
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,170] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
23:16:20 mariadb | '/usr/bin/mysql_secure_installation'
23:16:20 simulator | 2024-03-15 23:13:46,134 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:20 policy-api | =========|_|==============|___/=/_/_/_/
23:16:20 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:20 policy-db-migrator | 
23:16:20 prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.774556317Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.893864ms
23:16:20 kafka | [2024-03-15 23:13:51,514] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:16:20 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,173] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
23:16:20 mariadb | 
23:16:20 simulator | 2024-03-15 23:13:46,190 INFO Session workerName=node0
23:16:20 policy-api | :: Spring Boot :: (v3.1.8)
23:16:20 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:20 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:20 prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.06µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.778056256Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:host.name=4ceeac07ec8e (org.apache.zookeeper.ZooKeeper)
23:16:20 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,181] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
23:16:20 mariadb | which will also give you the option of removing the test
23:16:20 simulator | 2024-03-15 23:13:46,707 INFO Using GSON for REST calls
23:16:20 policy-api | 
23:16:20 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:20 policy-db-migrator | --------------
23:16:20 prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.77889733Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=840.514µs
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
23:16:20 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,195] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
23:16:20 mariadb | databases and anonymous user created by default. This is
23:16:20 simulator | 2024-03-15 23:13:46,807 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}
23:16:20 policy-api | [2024-03-15T23:13:58.629+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 16 (/app/api.jar started by policy in /opt/app/policy/api/bin)
23:16:20 policy-apex-pdp | security.providers = null
23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:20 prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.782417229Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
23:16:20 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,195] INFO Started @670ms (org.eclipse.jetty.server.Server)
23:16:20 mariadb | strongly recommended for production servers.
23:16:20 simulator | 2024-03-15 23:13:46,816 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
23:16:20 policy-api | [2024-03-15T23:13:58.630+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
23:16:20 policy-apex-pdp | send.buffer.bytes = 131072
23:16:20 policy-db-migrator | --------------
23:16:20 prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=31.161µs wal_replay_duration=416.304µs wbl_replay_duration=190ns total_replay_duration=475.286µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.783227292Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=809.833µs
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
23:16:20 policy-pap | =========|_|==============|___/=/_/_/_/
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,195] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
23:16:20 mariadb | 
23:16:20 simulator | 2024-03-15 23:13:46,825 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1540ms
23:16:20 policy-api | [2024-03-15T23:14:00.441+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
23:16:20 policy-apex-pdp | session.timeout.ms = 45000
23:16:20 policy-db-migrator | 
23:16:20 prometheus | ts=2024-03-15T23:13:48.342Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.787361219Z level=info msg="Executing migration" id="Update dashboard table charset"
23:16:20 policy-pap | :: Spring Boot :: (v3.1.8)
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,199] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:20 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
23:16:20 simulator | 2024-03-15 23:13:46,825 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4302 ms.
23:16:20 policy-api | [2024-03-15T23:14:00.534+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 6 JPA repository interfaces.
23:16:20 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:20 policy-db-migrator | 
23:16:20 prometheus | ts=2024-03-15T23:13:48.348Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.787485072Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=123.993µs
23:16:20 policy-pap | 
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,200] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:20 mariadb | 
23:16:20 simulator | 2024-03-15 23:13:46,835 INFO org.onap.policy.models.simulators starting SDNC simulator
23:16:20 policy-api | [2024-03-15T23:14:00.959+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
23:16:20 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:20 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:16:20 prometheus | ts=2024-03-15T23:13:48.350Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.790180268Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
23:16:20 policy-pap | [2024-03-15T23:14:11.924+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 29 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,202] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:20 mariadb | Please report any problems at https://mariadb.org/jira
23:16:20 simulator | 2024-03-15 23:13:46,839 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:20 policy-api | [2024-03-15T23:14:00.959+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
23:16:20 policy-apex-pdp | ssl.cipher.suites = null
23:16:20 policy-db-migrator | --------------
23:16:20 prometheus | ts=2024-03-15T23:13:48.350Z caller=main.go:1142 level=info msg="TSDB started"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.790293652Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=112.734µs
23:16:20 policy-pap | [2024-03-15T23:14:11.925+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,204] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:20 mariadb | 
23:16:20 simulator | 2024-03-15 23:13:46,839 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:20 policy-api | [2024-03-15T23:14:01.653+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
23:16:20 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
23:16:20 prometheus | ts=2024-03-15T23:13:48.350Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.794067098Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
23:16:20 policy-pap | [2024-03-15T23:14:13.936+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,219] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:20 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
23:16:20 simulator | 2024-03-15 23:13:46,840 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:20 policy-api | [2024-03-15T23:14:01.665+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
23:16:20 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:20 policy-db-migrator | --------------
23:16:20 prometheus | ts=2024-03-15T23:13:48.351Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=893.839µs db_storage=1.19µs remote_storage=1.4µs web_handler=700ns query_engine=1.14µs scrape=274.349µs scrape_sd=101.143µs notify=31.371µs notify_sd=24.261µs rules=1.67µs tracing=4.41µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.797295289Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.228761ms
23:16:20 policy-pap | [2024-03-15T23:14:14.063+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 116 ms. Found 7 JPA repository interfaces.
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,219] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:20 mariadb | 
23:16:20 simulator | 2024-03-15 23:13:46,841 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:20 policy-api | [2024-03-15T23:14:01.668+00:00|INFO|StandardService|main] Starting service [Tomcat]
23:16:20 policy-apex-pdp | ssl.engine.factory.class = null
23:16:20 policy-db-migrator | 
23:16:20 prometheus | ts=2024-03-15T23:13:48.351Z caller=main.go:1103 level=info msg="Server is ready to receive web requests."
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.80260551Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
23:16:20 policy-pap | [2024-03-15T23:14:14.453+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,220] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
23:16:20 mariadb | Consider joining MariaDB's strong and vibrant community:
23:16:20 simulator | 2024-03-15 23:13:46,855 INFO Session workerName=node0
23:16:20 policy-api | [2024-03-15T23:14:01.668+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
23:16:20 policy-apex-pdp | ssl.key.password = null
23:16:20 policy-db-migrator | 
23:16:20 prometheus | ts=2024-03-15T23:13:48.351Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.804607506Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.020457ms
23:16:20 policy-pap | [2024-03-15T23:14:14.453+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,221] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
23:16:20 mariadb | https://mariadb.org/get-involved/
23:16:20 simulator | 2024-03-15 23:13:46,923 INFO Using GSON for REST calls
23:16:20 policy-api | [2024-03-15T23:14:01.763+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
23:16:20 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:20 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.807541129Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
23:16:20 policy-pap | [2024-03-15T23:14:15.144+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,225] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
23:16:20 mariadb | 
23:16:20 simulator | 2024-03-15 23:13:46,934 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}
23:16:20 policy-api | [2024-03-15T23:14:01.763+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3067 ms
23:16:20 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.809415602Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.874163ms
23:16:20 policy-pap | [2024-03-15T23:14:15.155+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,225] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:20 mariadb | 2024-03-15 23:13:45+00:00 [Note] [Entrypoint]: Database files initialized
23:16:20 simulator | 2024-03-15 23:13:46,936 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:20 policy-api | [2024-03-15T23:14:02.210+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
23:16:20 policy-apex-pdp | ssl.keystore.key = null
23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.813558129Z level=info msg="Executing migration" id="Add column uid in dashboard"
23:16:20 policy-pap | [2024-03-15T23:14:15.157+00:00|INFO|StandardService|main] Starting service [Tomcat]
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,228] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
23:16:20 mariadb | 2024-03-15 23:13:45+00:00 [Note] [Entrypoint]: Starting temporary server
23:16:20 simulator | 2024-03-15 23:13:46,936 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1651ms
23:16:20 policy-api | [2024-03-15T23:14:02.298+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
23:16:20 policy-apex-pdp | ssl.keystore.location = null
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.815391321Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.831252ms
23:16:20 policy-pap | [2024-03-15T23:14:15.157+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,229] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:20 mariadb | 2024-03-15 23:13:45+00:00 [Note] [Entrypoint]: Waiting for server startup
23:16:20 policy-api | [2024-03-15T23:14:02.301+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
23:16:20 simulator | 2024-03-15 23:13:46,936 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4904 ms.
23:16:20 policy-apex-pdp | ssl.keystore.password = null
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.818086477Z level=info msg="Executing migration" id="Update uid column values in dashboard"
23:16:20 policy-pap | [2024-03-15T23:14:15.257+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
23:16:20 kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,229] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ...
23:16:20 policy-api | [2024-03-15T23:14:02.351+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
23:16:20 simulator | 2024-03-15 23:13:46,938 INFO org.onap.policy.models.simulators starting SO simulator
23:16:20 policy-apex-pdp | ssl.keystore.type = JKS
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.818288063Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=201.586µs
23:16:20 policy-pap | [2024-03-15T23:14:15.257+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3249 ms
23:16:20 kafka | [2024-03-15 23:13:51,518] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,239] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
23:16:20 policy-api | [2024-03-15T23:14:02.716+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
23:16:20 simulator | 2024-03-15 23:13:46,941 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:20 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:20 policy-db-migrator | > upgrade 0450-pdpgroup.sql
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.82102632Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
23:16:20 policy-pap | [2024-03-15T23:14:15.686+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
23:16:20 kafka | [2024-03-15 23:13:51,522] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,239] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Number of transaction pools: 1
23:16:20 policy-api | [2024-03-15T23:14:02.736+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
23:16:20 simulator | 2024-03-15 23:13:46,942 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:20 policy-apex-pdp | ssl.provider = null
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.821789932Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=762.911µs
23:16:20 policy-pap | [2024-03-15T23:14:15.772+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
23:16:20 kafka | [2024-03-15 23:13:51,527] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,252] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
23:16:20 policy-api | [2024-03-15T23:14:02.842+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82
23:16:20 simulator | 2024-03-15 23:13:46,944 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:20 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.826201766Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
23:16:20 policy-pap | [2024-03-15T23:14:15.775+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
23:16:20 kafka | [2024-03-15 23:13:51,535] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:20 zookeeper_1 | [2024-03-15 23:13:50,253] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
23:16:20 policy-api | [2024-03-15T23:14:02.845+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
23:16:20 simulator | 2024-03-15 23:13:46,947 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:20 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.826889376Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=684.389µs
23:16:20 policy-pap | [2024-03-15T23:14:15.814+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
23:16:20 kafka | [2024-03-15 23:13:51,562] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
23:16:20 zookeeper_1 | [2024-03-15 23:13:51,591] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:16:20 policy-api | [2024-03-15T23:14:04.791+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
23:16:20 simulator | 2024-03-15 23:13:46,949 INFO Session workerName=node0
23:16:20 policy-apex-pdp | ssl.truststore.certificates = null
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.829852179Z level=info msg="Executing migration" id="Update dashboard title length"
23:16:20 policy-pap | [2024-03-15T23:14:16.158+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
23:16:20 kafka | [2024-03-15 23:13:51,563] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:16:20 policy-api | [2024-03-15T23:14:04.794+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:20 simulator | 2024-03-15 23:13:46,997 INFO Using GSON for REST calls
23:16:20 policy-apex-pdp | ssl.truststore.location = null
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.82988031Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.731µs
23:16:20 policy-pap | [2024-03-15T23:14:16.177+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
23:16:20 kafka | [2024-03-15 23:13:51,572] INFO Socket connection established, initiating session, client: /172.17.0.7:44428, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
23:16:20 policy-api | [2024-03-15T23:14:05.958+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
23:16:20 simulator | 2024-03-15 23:13:47,009 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}
23:16:20 policy-apex-pdp | ssl.truststore.password = null
23:16:20 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.83269736Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
23:16:20 policy-pap | [2024-03-15T23:14:16.289+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7b6e5c12
23:16:20 kafka | [2024-03-15 23:13:51,613] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000034dc50000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Completed initialization of buffer pool
23:16:20 policy-api | [2024-03-15T23:14:06.826+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
23:16:20 simulator | 2024-03-15 23:13:47,011 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
23:16:20 policy-apex-pdp | ssl.truststore.type = JKS
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.833655197Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=957.137µs
23:16:20 policy-pap | [2024-03-15T23:14:16.291+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
23:16:20 kafka | [2024-03-15 23:13:51,744] INFO EventThread shut down for session: 0x10000034dc50000 (org.apache.zookeeper.ClientCnxn)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
23:16:20 policy-api | [2024-03-15T23:14:08.034+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:16:20 simulator | 2024-03-15 23:13:47,012 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1727ms
23:16:20 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.83765866Z level=info msg="Executing migration" id="create dashboard_provisioning"
23:16:20 policy-pap | [2024-03-15T23:14:18.205+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
23:16:20 kafka | [2024-03-15 23:13:51,745] INFO Session: 0x10000034dc50000 closed (org.apache.zookeeper.ZooKeeper)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: 128 rollback segments are active.
23:16:20 policy-api | [2024-03-15T23:14:08.277+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2f84848e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@607c7f58, org.springframework.security.web.context.SecurityContextHolderFilter@7b3d759f, org.springframework.security.web.header.HeaderWriterFilter@15200332, org.springframework.security.web.authentication.logout.LogoutFilter@25e7e6d, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4c66b3d9, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@62c4ad40, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@9bc10bd, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4bbb00a4, org.springframework.security.web.access.ExceptionTranslationFilter@4529b266, org.springframework.security.web.access.intercept.AuthorizationFilter@3413effc]
23:16:20 simulator | 2024-03-15 23:13:47,012 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4931 ms.
23:16:20 policy-apex-pdp | 
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.838449692Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=790.822µs
23:16:20 policy-pap | [2024-03-15T23:14:18.209+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:20 kafka | Using log4j config /etc/kafka/log4j.properties
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
23:16:20 policy-api | [2024-03-15T23:14:09.215+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:16:20 simulator | 2024-03-15 23:13:47,014 INFO org.onap.policy.models.simulators starting VFC simulator
23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.335+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.841281852Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
23:16:20 policy-pap | [2024-03-15T23:14:18.751+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
23:16:20 kafka | ===> Launching ...
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
23:16:20 policy-api | [2024-03-15T23:14:09.325+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:16:20 simulator | 2024-03-15 23:13:47,017 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.336+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.846760387Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.477745ms
23:16:20 policy-pap | [2024-03-15T23:14:19.126+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
23:16:20 kafka | ===> Launching kafka ...
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: log sequence number 45452; transaction id 14
23:16:20 policy-api | [2024-03-15T23:14:09.364+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
23:16:20 simulator | 2024-03-15 23:13:47,017 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.336+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544463333
23:16:20 policy-db-migrator | > upgrade 0470-pdp.sql
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.850558684Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
23:16:20 policy-pap | [2024-03-15T23:14:19.240+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
23:16:20 kafka | [2024-03-15 23:13:52,506] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] Plugin 'FEEDBACK' is disabled.
23:16:20 policy-api | [2024-03-15T23:14:09.383+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.519 seconds (process running for 12.14)
23:16:20 simulator | 2024-03-15 23:13:47,020 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.338+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-1, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Subscribed to topic(s): policy-pdp-pap
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.851299135Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=739.871µs
23:16:20 policy-pap | [2024-03-15T23:14:19.518+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:20 kafka | [2024-03-15 23:13:52,873] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:16:20 policy-api | [2024-03-15T23:14:26.666+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:20 simulator | 2024-03-15 23:13:47,020 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.351+00:00|INFO|ServiceManager|main] service manager starting
23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.854100744Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
23:16:20 policy-pap | allow.auto.create.topics = true
23:16:20 kafka | [2024-03-15 23:13:52,954] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
23:16:20 mariadb | 2024-03-15 23:13:45 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
23:16:20 policy-api | [2024-03-15T23:14:26.666+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:20 simulator | 2024-03-15 23:13:47,033 INFO Session workerName=node0 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.352+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.854909677Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=810.693µs 23:16:20 policy-pap | auto.commit.interval.ms = 5000 23:16:20 kafka | [2024-03-15 23:13:52,956] INFO starting (kafka.server.KafkaServer) 23:16:20 mariadb | 2024-03-15 23:13:45 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:20 policy-api | [2024-03-15T23:14:26.668+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 23:16:20 simulator | 2024-03-15 23:13:47,074 INFO Using GSON for REST calls 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.356+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f21b508-fe17-4ab8-9275-1762b58c9ac3, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.859344732Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:20 policy-pap | auto.include.jmx.reporter = true 23:16:20 kafka | [2024-03-15 23:13:52,956] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:20 mariadb | 2024-03-15 23:13:45 0 [Note] mariadbd: ready for connections. 23:16:20 policy-api | [2024-03-15T23:14:26.939+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:16:20 simulator | 2024-03-15 23:13:47,082 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.377+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.860125495Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=780.282µs 23:16:20 policy-pap | auto.offset.reset = latest 23:16:20 kafka | [2024-03-15 23:13:52,970] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 23:16:20 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:20 policy-api | [] 23:16:20 simulator | 2024-03-15 23:13:47,083 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:20 policy-apex-pdp | allow.auto.create.topics = true 23:16:20 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.862900123Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:20 policy-pap | bootstrap.servers = [kafka:9092] 23:16:20 kafka | [2024-03-15 23:13:52,975] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:20 mariadb | 2024-03-15 23:13:46+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:20 simulator | 2024-03-15 23:13:47,084 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1799ms 23:16:20 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.863194221Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=293.888µs 23:16:20 policy-pap | check.crcs = true 23:16:20 kafka | [2024-03-15 23:13:52,975] INFO Client environment:host.name=4ceeac07ec8e (org.apache.zookeeper.ZooKeeper) 23:16:20 mariadb | 2024-03-15 23:13:48+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:20 simulator | 2024-03-15 23:13:47,084 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4935 ms. 
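
policy-apex-pdp's consumer above (group 2f21b508-fe17-4ab8-9275-1762b58c9ac3, servers [kafka:9092]) has subscribed to the policy-pdp-pap topic, and policy-pap is building a consumer for the same topic. To watch that control traffic independently during a CSIT run, the stock console consumer shipped with Kafka can join under its own group; the group name below is made up for illustration:

    # Tail the PAP<->PDP control topic named in the consumer configs above.
    kafka-console-consumer --bootstrap-server kafka:9092 \
        --topic policy-pdp-pap --group log-inspector --from-beginning
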
23:16:20 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.866007531Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:20 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:20 kafka | [2024-03-15 23:13:52,975] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:20 mariadb | 2024-03-15 23:13:48+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:20 simulator | 2024-03-15 23:13:47,085 INFO org.onap.policy.models.simulators started 23:16:20 policy-apex-pdp | auto.offset.reset = latest 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.866529615Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=521.604µs 23:16:20 policy-pap | client.id = consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-1 23:16:20 kafka | [2024-03-15 23:13:52,975] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:20 mariadb | 23:16:20 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.87093487Z level=info msg="Executing migration" id="Add check_sum column" 23:16:20 policy-pap | client.rack = 23:16:20 kafka | [2024-03-15 23:13:52,975] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:20 mariadb | 2024-03-15 23:13:48+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:20 policy-apex-pdp | check.crcs = true 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.872927546Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.992306ms 23:16:20 policy-pap | connections.max.idle.ms = 540000 23:16:20 policy-pap | default.api.timeout.ms = 60000 23:16:20 kafka | [2024-03-15 23:13:52,975] INFO Client 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:
/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:20 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.876001043Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:20 mariadb | 
23:16:20 policy-pap | enable.auto.commit = true 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-apex-pdp | client.id = consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.876750884Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=749.661µs 23:16:20 mariadb | 2024-03-15 23:13:48+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:20 policy-pap | exclude.internal.topics = true 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-apex-pdp | client.rack = 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.879489612Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:20 mariadb | #!/bin/bash -xv 23:16:20 policy-pap | fetch.max.bytes = 52428800 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.879658286Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=168.514µs 23:16:20 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:20 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:20 policy-pap | fetch.max.wait.ms = 500 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.883901776Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:20 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:20 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
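
The grafana migrator logs every schema migration as an "Executing migration" / "Migration successfully executed" pair with a duration. When scanning a saved copy of this console output, those pairs can be pulled out in one pass; 'console.log' below stands in for wherever this output was saved:

    # Extract each completed grafana migration and its duration from the log.
    grep -o 'msg="Migration successfully executed" id="[^"]*" duration=[0-9.]*[µm]s' console.log
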
23:16:20 policy-pap | fetch.min.bytes = 1 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.884069541Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=167.825µs 23:16:20 policy-apex-pdp | enable.auto.commit = true 23:16:20 mariadb | # 23:16:20 policy-pap | group.id = a833d76c-6968-4ee8-9b4d-b3fefbf07611 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.88720848Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:20 policy-apex-pdp | exclude.internal.topics = true 23:16:20 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:20 policy-pap | group.instance.id = null 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.888534587Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.325437ms 23:16:20 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:20 mariadb | # you may not use this file except in compliance with the License. 23:16:20 policy-pap | heartbeat.interval.ms = 3000 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.891936023Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:20 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:20 mariadb | # You may obtain a copy of the License at 23:16:20 policy-pap | interceptor.classes = [] 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.895577756Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.643153ms 23:16:20 policy-apex-pdp | fetch.min.bytes = 1 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | # 23:16:20 policy-pap | internal.leave.group.on.close = true 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.900078153Z level=info msg="Executing migration" id="create data_source table" 23:16:20 policy-apex-pdp | group.id = 2f21b508-fe17-4ab8-9275-1762b58c9ac3 23:16:20 policy-db-migrator | 23:16:20 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:20 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.901218836Z level=info msg="Migration 
successfully executed" id="create data_source table" duration=1.140362ms 23:16:20 policy-apex-pdp | group.instance.id = null 23:16:20 policy-db-migrator | 23:16:20 mariadb | # 23:16:20 policy-pap | isolation.level = read_uncommitted 23:16:20 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.906040772Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:20 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:20 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:20 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:20 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 kafka | [2024-03-15 23:13:52,978] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.906613848Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=573.716µs 23:16:20 policy-apex-pdp | interceptor.classes = [] 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:20 policy-pap | max.partition.fetch.bytes = 1048576 23:16:20 kafka | [2024-03-15 23:13:52,981] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.909828469Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:20 policy-apex-pdp | internal.leave.group.on.close = true 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:20 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:20 policy-pap | max.poll.interval.ms = 300000 23:16:20 kafka | [2024-03-15 23:13:52,989] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.910386095Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=557.425µs 23:16:20 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | # See the License for the specific language governing permissions and 23:16:20 policy-pap | max.poll.records = 500 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.918979757Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:20 policy-apex-pdp | isolation.level = read_uncommitted 23:16:20 kafka | [2024-03-15 23:13:52,993] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:20 policy-db-migrator | 23:16:20 mariadb | # limitations under the License. 
23:16:20 policy-pap | metadata.max.age.ms = 300000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.920540951Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.562094ms 23:16:20 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 kafka | [2024-03-15 23:13:52,995] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:16:20 policy-db-migrator | 23:16:20 mariadb | 23:16:20 policy-pap | metric.reporters = [] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.923859745Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:20 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:20 kafka | [2024-03-15 23:13:53,003] INFO Socket connection established, initiating session, client: /172.17.0.7:44430, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:16:20 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:20 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:20 policy-pap | metrics.num.samples = 2 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.92472914Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=868.625µs 23:16:20 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:20 kafka | [2024-03-15 23:13:53,010] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000034dc50001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | do 23:16:20 policy-pap | metrics.recording.level = INFO 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.928041953Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:20 policy-apex-pdp | max.poll.records = 500 23:16:20 kafka | [2024-03-15 23:13:53,013] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:20 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:20 policy-pap | metrics.sample.window.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.934189077Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.147524ms 23:16:20 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:20 kafka | [2024-03-15 23:13:53,343] INFO Cluster ID = LbZnmjPNTK-gKtiXPvevcA (kafka.server.KafkaServer) 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:20 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.938572241Z level=info msg="Executing migration" id="create data_source table v2" 23:16:20 policy-apex-pdp | metric.reporters = [] 23:16:20 kafka | [2024-03-15 23:13:53,347] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:20 policy-db-migrator | 23:16:20 mariadb | done 23:16:20 policy-pap | receive.buffer.bytes = 65536 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.939685312Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.112811ms 23:16:20 policy-apex-pdp | metrics.num.samples = 2 23:16:20 kafka | [2024-03-15 23:13:53,406] INFO KafkaConfig values: 23:16:20 policy-db-migrator | 23:16:20 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:20 policy-pap | reconnect.backoff.max.ms = 1000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.942820631Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:20 policy-apex-pdp | metrics.recording.level = INFO 23:16:20 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:20 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:20 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:20 policy-pap | reconnect.backoff.ms = 50 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.943819069Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=995.268µs 23:16:20 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:20 kafka | alter.config.policy.class.name = null 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:20 policy-pap | request.timeout.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.947004139Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:20 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:20 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:20 policy-db-migrator | CREATE TABLE IF 
NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:20 policy-pap | retry.backoff.ms = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.948154692Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.149852ms 23:16:20 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:20 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:20 policy-pap | sasl.client.callback.handler.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.952913106Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:20 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:20 kafka | authorizer.class.name = 23:16:20 policy-db-migrator | 23:16:20 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:20 policy-pap | sasl.jaas.config = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.95339288Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=479.453µs 23:16:20 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:20 kafka | auto.create.topics.enable = true 23:16:20 policy-db-migrator | 23:16:20 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:20 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.956688243Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:20 policy-apex-pdp | request.timeout.ms = 30000 23:16:20 kafka | auto.include.jmx.reporter = true 23:16:20 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:20 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:20 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.958880865Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.192452ms 23:16:20 policy-apex-pdp | retry.backoff.ms = 100 23:16:20 kafka | auto.leader.rebalance.enable = true 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:20 policy-pap | sasl.kerberos.service.name = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.962909008Z level=info msg="Executing migration" id="Add secure json data column" 23:16:20 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:20 kafka | background.threads = 10 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_TOSCACAPABILITYTYPE (name, version)) 23:16:20 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:20 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.965060939Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.151331ms 23:16:20 policy-apex-pdp | sasl.jaas.config = null 23:16:20 kafka | broker.heartbeat.interval.ms = 2000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:20 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.970705919Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:20 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 kafka | broker.id = 1 23:16:20 policy-db-migrator | 23:16:20 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:20 policy-pap | sasl.login.callback.handler.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.97073817Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=34.381µs 23:16:20 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 kafka | broker.id.generation.enable = true 23:16:20 policy-db-migrator | 23:16:20 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:20 policy-pap | sasl.login.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.973695283Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:20 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:20 kafka | broker.rack = null 23:16:20 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:20 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:20 policy-pap | sasl.login.connect.timeout.ms = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.973920789Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=225.426µs 23:16:20 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 kafka | broker.session.timeout.ms = 9000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:20 policy-pap | sasl.login.read.timeout.ms = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.977019977Z level=info msg="Executing migration" id="Add read_only data column" 23:16:20 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 kafka | client.quota.callback.class = null 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:20 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:20 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.97923766Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.218863ms 23:16:20 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:20 kafka | compression.type = producer 23:16:20 policy-db-migrator | -------------- 
23:16:20 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:20 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.983794728Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:20 policy-apex-pdp | sasl.login.class = null 23:16:20 kafka | connection.failed.authentication.delay.ms = 100 23:16:20 policy-db-migrator | 23:16:20 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:20 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.983939063Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=144.534µs 23:16:20 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:20 kafka | connections.max.idle.ms = 600000 23:16:20 policy-db-migrator | 23:16:20 mariadb | 23:16:20 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.987841543Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:20 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:20 kafka | connections.max.reauth.ms = 0 23:16:20 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:20 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:20 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.987954246Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=112.493µs 23:16:20 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:20 kafka | control.plane.listener.name = null 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:20 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.990247721Z level=info msg="Executing migration" id="Add uid column" 23:16:20 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:20 kafka | controlled.shutdown.enable = true 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.992505675Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.257283ms 23:16:20 policy-pap | sasl.mechanism = GSSAPI 23:16:20 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:20 kafka | controlled.shutdown.max.retries = 3 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.995595282Z level=info msg="Executing migration" id="Update uid value" 23:16:20 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:20 kafka | 
controlled.shutdown.retry.backoff.ms = 5000 23:16:20 policy-db-migrator | 23:16:20 mariadb | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:49.995828988Z level=info msg="Migration successfully executed" id="Update uid value" duration=249.537µs 23:16:20 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:20 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:20 kafka | controller.listener.names = null 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.000678195Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:20 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:20 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:20 kafka | controller.quorum.append.linger.ms = 25 23:16:20 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.001488058Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=808.923µs 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:20 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.013065075Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 kafka | controller.quorum.election.timeout.ms = 1000 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Starting shutdown... 
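
Because the mariadb entrypoint runs /docker-entrypoint-initdb.d/db.sh under 'bash -xv', the script body is echoed into the log interleaved above. Reassembled from that trace, with nothing added beyond what the trace shows:

    #!/bin/bash -xv
    # Create each policy database and grant the policy user full access.
    for db in migration pooling policyadmin operationshistory clampacm policyclamp
    do
        mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
        mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
    done
    # Apply the grants, then pre-load the policyclamp schema.
    mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
    mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql

The '+' lines in the mariadb output are the -x trace of these same commands with variables expanded (MYSQL_ROOT_PASSWORD=secret, MYSQL_USER=policy_user, MYSQL_PASSWORD=policy_user).
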
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.014000925Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=935.411µs 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:20 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.017165367Z level=info msg="Executing migration" id="create api_key table" 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:20 kafka | controller.quorum.request.timeout.ms = 2000 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Buffer pool(s) dump completed at 240315 23:13:49 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.018172229Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.006692ms 23:16:20 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 kafka | controller.quorum.retry.backoff.ms = 20 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.022727305Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:20 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 kafka | controller.quorum.voters = [] 23:16:20 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Shutdown completed; log sequence number 381724; transaction id 298 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.023681095Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=953.22µs 23:16:20 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 kafka | controller.quota.window.num = 11 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd: Shutdown complete 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.026663211Z level=info msg="Executing migration" id="add index api_key.key" 23:16:20 policy-pap | security.protocol = PLAINTEXT 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 kafka | controller.quota.window.size.seconds = 1 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:20 mariadb | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.027341143Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=676.962µs 23:16:20 policy-pap | security.providers = null 23:16:20 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:20 kafka | controller.socket.timeout.ms = 30000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:20 grafana | logger=migrator 
t=2024-03-15T23:13:50.031044432Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:20 policy-pap | send.buffer.bytes = 131072 23:16:20 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:20 kafka | create.topic.policy.class.name = null 23:16:20 policy-db-migrator | 23:16:20 mariadb | 23:16:20 policy-pap | session.timeout.ms = 45000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.031791176Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=746.604µs 23:16:20 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:20 kafka | default.replication.factor = 1 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:20 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.03564462Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:20 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:20 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:20 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:20 mariadb | 23:16:20 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.036355893Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=711.853µs 23:16:20 policy-apex-pdp | security.providers = null 23:16:20 kafka | delegation.token.expiry.time.ms = 86400000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
23:16:20 policy-pap | ssl.cipher.suites = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.040635191Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:20 policy-apex-pdp | send.buffer.bytes = 131072 23:16:20 kafka | delegation.token.master.key = null 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:20 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.042246373Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.613072ms 23:16:20 policy-apex-pdp | session.timeout.ms = 45000 23:16:20 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Number of transaction pools: 1 23:16:20 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.048336129Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:20 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:20 kafka | delegation.token.secret.key = null 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:20 policy-pap | ssl.engine.factory.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.049238108Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=899.669µs 23:16:20 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:20 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:20 policy-pap | ssl.key.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.054033212Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:20 policy-apex-pdp | ssl.cipher.suites = null 23:16:20 kafka | delete.topic.enable = true 23:16:20 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:20 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.061361328Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.330916ms 23:16:20 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 kafka | early.start.listeners = null 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:20 policy-pap | ssl.keystore.certificate.chain = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.06452706Z level=info msg="Executing 
migration" id="create api_key table v2" 23:16:20 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:20 kafka | fetch.max.bytes = 57671680 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:20 policy-pap | ssl.keystore.key = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.065068757Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=543.577µs 23:16:20 policy-apex-pdp | ssl.engine.factory.class = null 23:16:20 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:20 policy-pap | ssl.keystore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.069009774Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:20 policy-apex-pdp | ssl.key.password = null 23:16:20 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:20 policy-pap | ssl.keystore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.069829841Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=816.036µs 23:16:20 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:20 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: 128 rollback segments are active. 23:16:20 policy-pap | ssl.keystore.type = JKS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.072895189Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:20 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:20 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:20 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:20 policy-pap | ssl.protocol = TLSv1.3 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.073642243Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=747.174µs 23:16:20 policy-apex-pdp | ssl.keystore.key = null 23:16:20 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
23:16:20 policy-pap | ssl.provider = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.076763774Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:20 policy-apex-pdp | ssl.keystore.location = null 23:16:20 kafka | group.consumer.max.size = 2147483647 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: log sequence number 381724; transaction id 299 23:16:20 policy-pap | ssl.secure.random.implementation = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.077511398Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=749.254µs 23:16:20 policy-apex-pdp | ssl.keystore.password = null 23:16:20 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:20 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.081323001Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:20 policy-apex-pdp | ssl.keystore.type = JKS 23:16:20 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:20 policy-pap | ssl.truststore.certificates = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.081645741Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=323.161µs 23:16:20 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:20 kafka | group.consumer.session.timeout.ms = 45000 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:20 policy-pap | ssl.truststore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.085168884Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:20 policy-apex-pdp | ssl.provider = null 23:16:20 kafka | group.coordinator.new.enable = false 23:16:20 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:20 policy-pap | ssl.truststore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.08658497Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.415796ms 23:16:20 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:20 kafka | group.coordinator.threads = 1 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] Server socket created on IP: '0.0.0.0'. 
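
policy-db-migrator is stepping through numbered upgrade scripts (0470-pdp.sql, 0480-pdpstatistics.sql, on up through the tosca* scripts above), each bracketed by '--------------' markers and built from idempotent CREATE TABLE IF NOT EXISTS statements. Once it completes, the result can be spot-checked with the credentials from the db.sh trace; that the migrator targets the policyadmin database is an assumption here, not stated in this log:

    # List the TOSCA tables and inspect one schema created by the migrator.
    mysql -upolicy_user -p"${MYSQL_PASSWORD}" policyadmin \
        --execute "SHOW TABLES LIKE 'tosca%'; DESCRIBE pdpstatistics;"
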
23:16:20 policy-pap | ssl.truststore.type = JKS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.09001474Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:20 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:20 kafka | group.initial.rebalance.delay.ms = 3000 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] Server socket created on IP: '::'. 23:16:20 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.090064382Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=50.882µs 23:16:20 policy-apex-pdp | ssl.truststore.certificates = null 23:16:20 kafka | group.max.session.timeout.ms = 1800000 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd: ready for connections. 23:16:20 policy-pap | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.094249877Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:20 policy-apex-pdp | ssl.truststore.location = null 23:16:20 kafka | group.max.size = 2147483647 23:16:20 policy-db-migrator | 23:16:20 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:20 policy-pap | [2024-03-15T23:14:19.675+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.096943273Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.688196ms 23:16:20 policy-apex-pdp | ssl.truststore.password = null 23:16:20 kafka | group.min.session.timeout.ms = 6000 23:16:20 policy-db-migrator | 23:16:20 mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Buffer pool(s) load completed at 240315 23:13:49 23:16:20 policy-pap | [2024-03-15T23:14:19.676+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.100479647Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:20 policy-apex-pdp | ssl.truststore.type = JKS 23:16:20 kafka | initial.broker.registration.timeout.ms = 60000 23:16:20 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:20 mariadb | 2024-03-15 23:13:50 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 23:16:20 policy-pap | [2024-03-15T23:14:19.676+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544459674 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.104537278Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.055231ms 23:16:20 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 kafka | inter.broker.listener.name = PLAINTEXT 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:51 52 [Warning] Aborted connection 52 to db: 
'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:20 policy-pap | [2024-03-15T23:14:19.678+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-1, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Subscribed to topic(s): policy-pdp-pap 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.107767212Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:20 policy-apex-pdp | 23:16:20 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:20 mariadb | 2024-03-15 23:13:52 97 [Warning] Aborted connection 97 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:20 policy-pap | [2024-03-15T23:14:19.679+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.107939177Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=172.195µs 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.386+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:20 kafka | kafka.metrics.polling.interval.secs = 10 23:16:20 policy-db-migrator | -------------- 23:16:20 mariadb | 2024-03-15 23:13:53 144 [Warning] Aborted connection 144 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:20 policy-pap | allow.auto.create.topics = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.113957071Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.386+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:20 kafka | kafka.metrics.reporters = [] 23:16:20 policy-db-migrator | 23:16:20 policy-pap | auto.commit.interval.ms = 5000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.116852164Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.897273ms 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.386+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544463386 23:16:20 kafka | leader.imbalance.check.interval.seconds = 300 23:16:20 policy-db-migrator | 23:16:20 policy-pap | auto.include.jmx.reporter = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.120956256Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.387+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Subscribed to topic(s): policy-pdp-pap 23:16:20 kafka | leader.imbalance.per.broker.percentage = 10 23:16:20 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:20 policy-pap | auto.offset.reset = latest 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.123659193Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.702737ms 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.392+00:00|INFO|InlineBusTopicSink|main] 
InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c99ced55-aa2f-48db-bfd1-cad73b9b866f, alive=false, publisher=null]]: starting 23:16:20 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | bootstrap.servers = [kafka:9092] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.127036192Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.408+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:20 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:20 policy-pap | check.crcs = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.127886619Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=850.167µs 23:16:20 policy-apex-pdp | acks = -1 23:16:20 kafka | log.cleaner.backoff.ms = 15000 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.130968578Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:20 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:20 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:20 policy-db-migrator | 23:16:20 policy-pap | client.id = consumer-policy-pap-2 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.131521796Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=552.928µs 23:16:20 policy-apex-pdp | batch.size = 16384 23:16:20 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:20 policy-db-migrator | 23:16:20 policy-pap | client.rack = 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.135857006Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:20 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:20 kafka | log.cleaner.enable = true 23:16:20 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:20 policy-pap | connections.max.idle.ms = 540000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.136700493Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=840.777µs 23:16:20 policy-apex-pdp | buffer.memory = 33554432 23:16:20 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | default.api.timeout.ms = 60000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.14003066Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:20 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:20 kafka | log.cleaner.io.buffer.size = 524288 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 policy-pap | enable.auto.commit = true 
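The ConsumerConfig dumps interleaved above describe the Kafka consumers that policy-pap and apex-pdp build for the policy-pdp-pap topic: string key/value deserializers, bootstrap.servers = [kafka:9092], auto.offset.reset = latest. Here is a minimal sketch of an equivalent consumer that sets only the values actually printed in the dump (group id policy-pap) and leaves everything else at client defaults; it is an illustration, not the policy framework's own wrapper code:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ConsumerConfig dump in this log.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```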
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.140858897Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=827.917µs 23:16:20 policy-apex-pdp | client.id = producer-1 23:16:20 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | exclude.internal.topics = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.144953189Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:20 policy-apex-pdp | compression.type = none 23:16:20 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:20 policy-db-migrator | 23:16:20 policy-pap | fetch.max.bytes = 52428800 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.145812836Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=861.088µs 23:16:20 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:20 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:20 policy-db-migrator | 23:16:20 policy-pap | fetch.max.wait.ms = 500 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.150362883Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:20 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:20 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:20 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:20 policy-pap | fetch.min.bytes = 1 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.152263134Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.900642ms 23:16:20 policy-apex-pdp | enable.idempotence = true 23:16:20 kafka | log.cleaner.threads = 1 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | group.id = policy-pap 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.156107608Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:20 policy-apex-pdp | interceptor.classes = [] 23:16:20 kafka | log.cleanup.policy = [delete] 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:20 policy-pap | group.instance.id = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.156324195Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=70.213µs 23:16:20 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:20 kafka | log.dir = /tmp/kafka-logs 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | heartbeat.interval.ms = 3000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.159792836Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:20 policy-apex-pdp | linger.ms = 0 23:16:20 kafka | log.dirs = /var/lib/kafka/data 23:16:20 policy-db-migrator | 23:16:20 policy-pap | interceptor.classes = [] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.159819857Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=27.381µs 23:16:20 policy-apex-pdp | 
max.block.ms = 60000 23:16:20 kafka | log.flush.interval.messages = 9223372036854775807 23:16:20 policy-db-migrator | 23:16:20 policy-pap | internal.leave.group.on.close = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.164075134Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:20 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:20 kafka | log.flush.interval.ms = null 23:16:20 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:20 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.1673709Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.294126ms 23:16:20 policy-apex-pdp | max.request.size = 1048576 23:16:20 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | isolation.level = read_uncommitted 23:16:20 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:20 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.171675909Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:20 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:20 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.175221603Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.544664ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | max.partition.fetch.bytes = 1048576 23:16:20 policy-apex-pdp | metric.reporters = [] 23:16:20 policy-apex-pdp | metrics.num.samples = 2 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.179020835Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:20 policy-pap | max.poll.interval.ms = 300000 23:16:20 policy-pap | max.poll.records = 500 23:16:20 policy-apex-pdp | metrics.recording.level = INFO 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.179087647Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=67.292µs 23:16:20 policy-pap | metadata.max.age.ms = 300000 23:16:20 kafka | log.index.interval.bytes = 4096 23:16:20 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.184500601Z level=info msg="Executing migration" id="create quota table v1" 23:16:20 policy-pap | metric.reporters = [] 23:16:20 kafka | log.index.size.max.bytes = 10485760 23:16:20 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:20 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.18539075Z level=info msg="Migration successfully executed" id="create quota table v1" duration=889.629µs 23:16:20 policy-pap | metrics.num.samples = 2 23:16:20 kafka | log.local.retention.bytes = -2 23:16:20 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:20 policy-db-migrator | 
-------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.188780969Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:20 policy-pap | metrics.recording.level = INFO 23:16:20 kafka | log.local.retention.ms = -2 23:16:20 policy-apex-pdp | partitioner.class = null 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.189789552Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.006952ms 23:16:20 policy-pap | metrics.sample.window.ms = 30000 23:16:20 kafka | log.message.downconversion.enable = true 23:16:20 policy-apex-pdp | partitioner.ignore.keys = false 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.193307615Z level=info msg="Executing migration" id="Update quota table charset" 23:16:20 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:20 kafka | log.message.format.version = 3.0-IV1 23:16:20 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.193345856Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.731µs 23:16:20 policy-pap | receive.buffer.bytes = 65536 23:16:20 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:16:20 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.196760436Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:20 policy-pap | reconnect.backoff.max.ms = 1000 23:16:20 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:20 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:20 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.197742788Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=984.511µs 23:16:20 policy-pap | reconnect.backoff.ms = 50 23:16:20 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:20 policy-apex-pdp | request.timeout.ms = 30000 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.203155942Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:20 policy-pap | request.timeout.ms = 30000 23:16:20 kafka | log.message.timestamp.type = CreateTime 23:16:20 policy-apex-pdp | retries = 2147483647 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.203924607Z level=info 
msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=768.304µs 23:16:20 policy-pap | retry.backoff.ms = 100 23:16:20 kafka | log.preallocate = false 23:16:20 policy-apex-pdp | retry.backoff.ms = 100 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.208310898Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:20 policy-pap | sasl.client.callback.handler.class = null 23:16:20 kafka | log.retention.bytes = -1 23:16:20 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.211014725Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.703827ms 23:16:20 policy-pap | sasl.jaas.config = null 23:16:20 kafka | log.retention.check.interval.ms = 300000 23:16:20 policy-apex-pdp | sasl.jaas.config = null 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.214103364Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:20 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 kafka | log.retention.hours = 168 23:16:20 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.214131485Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=28.441µs 23:16:20 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 kafka | log.retention.minutes = null 23:16:20 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.218285529Z level=info msg="Executing migration" id="create session table" 23:16:20 policy-pap | sasl.kerberos.service.name = null 23:16:20 kafka | log.retention.ms = null 23:16:20 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.219081724Z level=info msg="Migration successfully executed" id="create session table" duration=795.815µs 23:16:20 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 kafka | log.roll.hours = 168 23:16:20 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.223536708Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:20 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 kafka | log.roll.jitter.hours = 0 23:16:20 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.22361566Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=79.412µs 23:16:20 policy-pap | sasl.login.callback.handler.class = null 23:16:20 kafka | log.roll.jitter.ms = null 23:16:20 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator 
t=2024-03-15T23:13:50.226034118Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:20 policy-pap | sasl.login.class = null 23:16:20 kafka | log.roll.ms = null 23:16:20 policy-apex-pdp | sasl.login.class = null 23:16:20 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.22609575Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=61.772µs 23:16:20 policy-pap | sasl.login.connect.timeout.ms = null 23:16:20 kafka | log.segment.bytes = 1073741824 23:16:20 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.231153993Z level=info msg="Executing migration" id="create playlist table v2" 23:16:20 policy-pap | sasl.login.read.timeout.ms = null 23:16:20 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:20 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.23198899Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=834.587µs 23:16:20 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:20 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:20 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:20 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:20 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.236743043Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.237701574Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=959.141µs 23:16:20 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.241347301Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:20 policy-pap | sasl.mechanism = GSSAPI 23:16:20 kafka | log.segment.delete.delay.ms = 60000 23:16:20 policy-db-migrator | 23:16:20 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.241377102Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=30.801µs 23:16:20 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 kafka | max.connection.creation.rate = 2147483647 23:16:20 policy-db-migrator | 23:16:20 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.245734952Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:20 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:20 kafka | max.connections = 2147483647 23:16:20 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:20 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.245786924Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=55.122µs 23:16:20 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:20 kafka | 
max.connections.per.ip = 2147483647 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.250899038Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 kafka | max.connections.per.ip.overrides = 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.255858278Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.95837ms 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.259314319Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 kafka | message.max.bytes = 1048588 23:16:20 policy-db-migrator | 23:16:20 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.263405901Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.071641ms 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 policy-db-migrator | 23:16:20 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.267635737Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:20 kafka | metadata.log.dir = null 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:20 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:20 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.267765481Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=132.544µs 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:20 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.271942836Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 kafka | metadata.log.segment.bytes = 1073741824 23:16:20 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName 
VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.272026028Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=83.612µs 23:16:20 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 kafka | metadata.log.segment.min.bytes = 8388608 23:16:20 policy-pap | security.protocol = PLAINTEXT 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.284431088Z level=info msg="Executing migration" id="create preferences table v3" 23:16:20 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:20 kafka | metadata.log.segment.ms = 604800000 23:16:20 policy-pap | security.providers = null 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.285486482Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.057454ms 23:16:20 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:20 kafka | metadata.max.idle.interval.ms = 500 23:16:20 policy-pap | send.buffer.bytes = 131072 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.290782042Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:20 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:20 kafka | metadata.max.retention.bytes = 104857600 23:16:20 policy-pap | session.timeout.ms = 45000 23:16:20 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.290851394Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=75.732µs 23:16:20 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:20 kafka | metadata.max.retention.ms = 604800000 23:16:20 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.295850205Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:20 policy-apex-pdp | security.providers = null 23:16:20 kafka | metric.reporters = [] 23:16:20 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.30126918Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.420094ms 23:16:20 policy-apex-pdp | send.buffer.bytes = 131072 23:16:20 kafka | metrics.num.samples = 2 23:16:20 policy-pap | ssl.cipher.suites = null 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.305901329Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:20 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:20 kafka | metrics.recording.level = INFO 23:16:20 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator 
t=2024-03-15T23:13:50.306050264Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=149.414µs 23:16:20 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:20 kafka | metrics.sample.window.ms = 30000 23:16:20 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.308970247Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:20 policy-apex-pdp | ssl.cipher.suites = null 23:16:20 kafka | min.insync.replicas = 1 23:16:20 policy-pap | ssl.engine.factory.class = null 23:16:20 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.312069287Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.09603ms 23:16:20 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 policy-pap | ssl.key.password = null 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.318887517Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:20 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:20 kafka | node.id = 1 23:16:20 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.323824096Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.934478ms 23:16:20 policy-apex-pdp | ssl.engine.factory.class = null 23:16:20 kafka | num.io.threads = 8 23:16:20 policy-pap | ssl.keystore.certificate.chain = null 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.327319108Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:20 policy-apex-pdp | ssl.key.password = null 23:16:20 kafka | num.network.threads = 3 23:16:20 policy-pap | ssl.keystore.key = null 23:16:20 policy-db-migrator | 23:16:20 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:20 policy-pap | ssl.keystore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.327426302Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=108.663µs 23:16:20 policy-db-migrator | 23:16:20 kafka | num.partitions = 1 23:16:20 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:20 policy-pap | ssl.keystore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.330980026Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:20 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:20 kafka | num.recovery.threads.per.data.dir = 1 23:16:20 policy-apex-pdp | ssl.keystore.key = null 23:16:20 policy-pap | ssl.keystore.type = JKS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.331890215Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=910.029µs 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | num.replica.alter.log.dirs.threads = null 23:16:20 policy-apex-pdp | ssl.keystore.location = null 23:16:20 policy-pap | ssl.protocol = TLSv1.3 23:16:20 grafana | logger=migrator 
t=2024-03-15T23:13:50.337036361Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 kafka | num.replica.fetchers = 1 23:16:20 policy-apex-pdp | ssl.keystore.password = null 23:16:20 policy-pap | ssl.provider = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.338683104Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.644813ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | offset.metadata.max.bytes = 4096 23:16:20 policy-apex-pdp | ssl.keystore.type = JKS 23:16:20 policy-pap | ssl.secure.random.implementation = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.361132796Z level=info msg="Executing migration" id="create alert table v1" 23:16:20 policy-db-migrator | 23:16:20 kafka | offsets.commit.required.acks = -1 23:16:20 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:20 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.362867982Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.739116ms 23:16:20 policy-db-migrator | 23:16:20 kafka | offsets.commit.timeout.ms = 5000 23:16:20 policy-apex-pdp | ssl.provider = null 23:16:20 policy-pap | ssl.truststore.certificates = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.372189302Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:20 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:20 kafka | offsets.load.buffer.size = 5242880 23:16:20 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:20 policy-pap | ssl.truststore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.373843485Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.654063ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | offsets.retention.check.interval.ms = 600000 23:16:20 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:20 policy-pap | ssl.truststore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.377324488Z level=info msg="Executing migration" id="add index alert state" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:20 kafka | offsets.retention.minutes = 10080 23:16:20 policy-apex-pdp | ssl.truststore.certificates = null 23:16:20 policy-pap | ssl.truststore.type = JKS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.378281388Z level=info msg="Migration successfully executed" id="add index alert state" duration=956.901µs 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | offsets.topic.compression.codec = 0 
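The apex-pdp ProducerConfig dump running through these lines (acks = -1, enable.idempotence = true, retries = 2147483647, string serializers) describes the publisher it creates for outgoing PDP messages. A minimal sketch of an equivalent idempotent producer follows; only the values printed in the dump are set explicitly, and the sample payload is a placeholder:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPublisherSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");               // the dump's acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // idempotent publisher, as in the dump
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder message; the real PDP_STATUS payload appears later in this log.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}
```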
23:16:20 policy-apex-pdp | ssl.truststore.location = null 23:16:20 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.381660167Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:20 policy-db-migrator | 23:16:20 kafka | offsets.topic.num.partitions = 50 23:16:20 policy-apex-pdp | ssl.truststore.password = null 23:16:20 policy-pap | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.382646679Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=986.542µs 23:16:20 policy-db-migrator | 23:16:20 kafka | offsets.topic.replication.factor = 1 23:16:20 policy-apex-pdp | ssl.truststore.type = JKS 23:16:20 policy-pap | [2024-03-15T23:14:19.684+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.391324378Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:20 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:20 kafka | offsets.topic.segment.bytes = 104857600 23:16:20 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:20 policy-pap | [2024-03-15T23:14:19.684+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.392338071Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.011673ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:20 policy-apex-pdp | transactional.id = null 23:16:20 policy-pap | [2024-03-15T23:14:19.684+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544459684 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.399401048Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:20 kafka | password.encoder.iterations = 4096 23:16:20 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:20 policy-pap | [2024-03-15T23:14:19.685+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.401729073Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=2.329075ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | password.encoder.key.length = 128 23:16:20 policy-apex-pdp | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.405818555Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:20.012+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:20 kafka | 
password.encoder.keyfactory.algorithm = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.418+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.406765535Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=947.11µs 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:20.192+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:20 kafka | password.encoder.old.secret = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.440+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.412127908Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:20 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:20 policy-pap | [2024-03-15T23:14:20.474+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@55cb3b7, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@497fd334, org.springframework.security.web.context.SecurityContextHolderFilter@7ce4498f, org.springframework.security.web.header.HeaderWriterFilter@176e839e, org.springframework.security.web.authentication.logout.LogoutFilter@6e489bb8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6787bd41, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7bd7d71c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@ce0bbd5, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@280c3dc0, org.springframework.security.web.access.ExceptionTranslationFilter@60fe75f7, org.springframework.security.web.access.intercept.AuthorizationFilter@3d3b852e] 23:16:20 kafka | password.encoder.secret = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.440+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.425466487Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.337929ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.429+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:20 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.441+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544463440 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.430403466Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:20 policy-pap | 
[2024-03-15T23:14:21.555+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:20 kafka | process.roles = [] 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.441+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c99ced55-aa2f-48db-bfd1-cad73b9b866f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.431133979Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=730.373µs 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.574+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:20 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.445+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.434172937Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:21.594+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:20 kafka | producer.id.expiration.ms = 86400000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.446+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.436591365Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=2.418748ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:21.594+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:20 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.448+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.444244921Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:20 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:20 policy-pap | [2024-03-15T23:14:21.595+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:20 kafka | queued.max.request.bytes = -1 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.448+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.444848421Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=612.85µs 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.595+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:20 kafka | queued.max.requests = 500 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.451845426Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, 
capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:20 policy-pap | [2024-03-15T23:14:21.595+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:20 kafka | quota.window.num = 11 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.45260087Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=761.504µs 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.596+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:20 kafka | quota.window.size.seconds = 1 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.455845245Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:21.596+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:20 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f21b508-fe17-4ab8-9275-1762b58c9ac3, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.456687632Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=842.197µs 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:21.601+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a833d76c-6968-4ee8-9b4d-b3fefbf07611, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1755aee6 23:16:20 kafka | remote.log.manager.task.interval.ms = 30000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f21b508-fe17-4ab8-9275-1762b58c9ac3, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.461460995Z level=info msg="Executing migration" id="Add column is_default" 23:16:20 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:20 policy-pap | [2024-03-15T23:14:21.614+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a833d76c-6968-4ee8-9b4d-b3fefbf07611, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:20 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.465391512Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.934027ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.615+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:20 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:20 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.468840773Z level=info msg="Executing migration" id="Add column frequency" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:20 policy-pap | allow.auto.create.topics = true 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.469+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:20 kafka | remote.log.manager.thread.pool.size = 10 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.47278453Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.943667ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | auto.commit.interval.ms = 5000 
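A few lines below, apex-pdp publishes its first PDP_STATUS heartbeat to policy-pdp-pap as JSON. The sketch here assembles a payload with the same field names using Gson and a plain map; that is an illustrative shortcut, not the ONAP message bean the component actually serializes:

```java
import com.google.gson.Gson;
import java.util.Map;
import java.util.UUID;

public class HeartbeatSketch {
    public static void main(String[] args) {
        // Field names mirror the PDP_STATUS message shown below; values are freshly generated.
        Map<String, Object> status = Map.of(
                "pdpType", "apex",
                "state", "PASSIVE",
                "healthy", "HEALTHY",
                "messageName", "PDP_STATUS",
                "requestId", UUID.randomUUID().toString(),
                "timestampMs", System.currentTimeMillis(),
                "pdpGroup", "defaultGroup");
        System.out.println(new Gson().toJson(status));
    }
}
```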
23:16:20 policy-apex-pdp | [] 23:16:20 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.478413671Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | auto.include.jmx.reporter = true 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.472+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:20 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.482140381Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.72456ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | auto.offset.reset = latest 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"138adf8a-85b2-4615-8a26-a9d5f452bbb8","timestampMs":1710544463450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 kafka | remote.log.metadata.manager.class.path = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.487337188Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:20 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:20 policy-pap | bootstrap.servers = [kafka:9092] 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:20 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.489857289Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.520001ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | check.crcs = true 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|ServiceManager|main] service manager starting 23:16:20 kafka | remote.log.metadata.manager.listener.name = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.493333091Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:20 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:20 kafka | remote.log.reader.max.pending.tasks = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.494295942Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=962.101µs 23:16:20 policy-pap | client.id = consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|JettyServletServer|main] JettyJerseyServer 
[Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | remote.log.reader.threads = 10 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.497561597Z level=info msg="Executing migration" id="Update alert table charset" 23:16:20 policy-pap | client.rack = 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|ServiceManager|main] service manager started 23:16:20 policy-db-migrator | 23:16:20 kafka | remote.log.storage.manager.class.name = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.497589828Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=28.981µs 23:16:20 policy-pap | connections.max.idle.ms = 540000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|ServiceManager|main] service manager started 23:16:20 policy-db-migrator | 23:16:20 kafka | remote.log.storage.manager.class.path = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.502564158Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:20 policy-pap | default.api.timeout.ms = 60000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 23:16:20 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:20 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
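The JettyJerseyServer entry above shows the apex-pdp REST endpoint being assembled on 0.0.0.0:6969, with the Prometheus MetricsServlet mapped to /metrics and a Jersey ServletContainer on /*. Below is a minimal sketch of the /metrics half of that wiring with embedded Jetty 11, leaving out Jersey and the policyadmin basic-auth visible in the log; the servlet class name comes from the log itself, while the assembly is a plausible reconstruction, not apex-pdp's actual startup code.

    import io.prometheus.client.servlet.jakarta.exporter.MetricsServlet;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class MetricsEndpointSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6969);            // port 6969, as in the log
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");                 // contextPath=/ in the log
            // Prometheus exposition endpoint, scraped by the
            // "GET /metrics HTTP/1.1" requests seen later in this log.
            context.addServlet(new ServletHolder(new MetricsServlet()), "/metrics");
            server.setHandler(context);
            server.start();                              // STARTING -> started, as logged
            server.join();
        }
    }
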
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.502590299Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=27.161µs 23:16:20 policy-pap | enable.auto.commit = true 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | remote.log.storage.system.enable = false 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.506850326Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:20 policy-pap | exclude.internal.topics = true 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:20 kafka | replica.fetch.backoff.ms = 1000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.802+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: LbZnmjPNTK-gKtiXPvevcA 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.509917305Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=3.069569ms 23:16:20 policy-pap | fetch.max.bytes = 52428800 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | replica.fetch.max.bytes = 1048576 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.802+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Cluster ID: LbZnmjPNTK-gKtiXPvevcA 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.513626574Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:20 policy-pap | fetch.max.wait.ms = 500 23:16:20 policy-db-migrator | 23:16:20 kafka | replica.fetch.min.bytes = 1 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.804+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.804+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, 
groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:20 policy-pap | fetch.min.bytes = 1 23:16:20 policy-db-migrator | 23:16:20 kafka | replica.fetch.response.max.bytes = 10485760 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.811+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] (Re-)joining group 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.514752731Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.123957ms 23:16:20 policy-pap | group.id = a833d76c-6968-4ee8-9b4d-b3fefbf07611 23:16:20 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:20 kafka | replica.fetch.wait.max.ms = 500 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Request joining group due to: need to re-join with the given member-id: consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.517951064Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:20 policy-pap | group.instance.id = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.518701248Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=749.324µs 23:16:20 policy-pap | heartbeat.interval.ms = 3000 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:20 kafka | replica.lag.time.max.ms = 30000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:23.843+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] (Re-)joining group 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.523968037Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:20 policy-pap | interceptor.classes = [] 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | replica.selector.class = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:24.283+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:20 policy-pap | internal.leave.group.on.close = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.524558856Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=593.249µs 23:16:20 policy-db-migrator | 23:16:20 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:20 policy-apex-pdp | [2024-03-15T23:14:24.284+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:20 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.527188441Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:20 policy-db-migrator | 23:16:20 kafka | replica.socket.timeout.ms = 30000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:26.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Successfully joined group with generation Generation{generationId=1, memberId='consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70', protocol='range'} 23:16:20 policy-pap | isolation.level = read_uncommitted 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.528015948Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=827.167µs 23:16:20 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:20 kafka | replication.quota.window.num = 11 23:16:20 policy-apex-pdp | [2024-03-15T23:14:26.865+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Finished assignment for group at generation 1: {consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70=Assignment(partitions=[policy-pdp-pap-0])} 23:16:20 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.53026828Z level=info msg="Executing migration" id="Add for to alert table" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | replication.quota.window.size.seconds = 1 23:16:20 policy-apex-pdp | 
[2024-03-15T23:14:26.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Successfully synced group in generation Generation{generationId=1, memberId='consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70', protocol='range'} 23:16:20 policy-pap | max.partition.fetch.bytes = 1048576 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.536929034Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=6.661544ms 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:20 kafka | request.timeout.ms = 30000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:26.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:20 policy-pap | max.poll.interval.ms = 300000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.541851123Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | reserved.broker.max.id = 1000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:26.878+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Adding newly assigned partitions: policy-pdp-pap-0 23:16:20 policy-pap | max.poll.records = 500 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.545359156Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.508783ms 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.client.callback.handler.class = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:26.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Found no committed offset for partition policy-pdp-pap-0 23:16:20 policy-pap | metadata.max.age.ms = 300000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.54861153Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:20 policy-apex-pdp | [2024-03-15T23:14:26.897+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
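The entries above trace a complete Kafka consumer-group handshake for group 2f21b508-fe17-4ab8-9275-1762b58c9ac3: coordinator discovery, a first JoinGroup rejected with MemberIdRequiredException, a rejoin with the assigned member id, a successful sync at generation 1, assignment of policy-pdp-pap-0, and finally an offset reset to latest because no committed offset exists yet. All of that is driven from inside poll(); here is a minimal sketch reusing the bootstrap servers, group id, topic and deserializers reported elsewhere in this log.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PdpPapListenerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");                  // from the log
            props.put("group.id", "2f21b508-fe17-4ab8-9275-1762b58c9ac3"); // from the log
            props.put("auto.offset.reset", "latest");  // why the offset is reset to the end
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    // The coordinator discovery, MemberIdRequiredException retry,
                    // generation-1 sync and partition assignment logged above all
                    // happen inside this call.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.value());
                    }
                }
            }
        }
    }
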
23:16:20 policy-pap | metric.reporters = [] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.548765715Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=156.705µs 23:16:20 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:20 kafka | sasl.jaas.config = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.450+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:20 policy-pap | metrics.num.samples = 2 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.552234257Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 policy-pap | metrics.recording.level = INFO 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.552870098Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=635.28µs 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:20 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.475+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 policy-pap | metrics.sample.window.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.557084283Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.557949291Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=861.998µs 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.kerberos.service.name = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.479+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:20 policy-pap | receive.buffer.bytes = 65536 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.561250957Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.639+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 policy-pap | reconnect.backoff.max.ms = 1000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.567183418Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.927141ms 23:16:20 policy-db-migrator | > 
upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:20 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 policy-apex-pdp | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 policy-pap | reconnect.backoff.ms = 50 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.57190503Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.login.callback.handler.class = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.657+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:20 policy-pap | request.timeout.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.571994343Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=89.923µs 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:20 kafka | sasl.login.class = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.657+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:20 policy-pap | retry.backoff.ms = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.576269131Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.login.connect.timeout.ms = null 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 policy-pap | sasl.client.callback.handler.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.577124448Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=855.757µs 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.login.read.timeout.ms = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.663+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:20 policy-pap | sasl.jaas.config = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.581084866Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.582057537Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=972.601µs 23:16:20 
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:20 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.677+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.585613211Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.login.refresh.window.factor = 0.8 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 policy-pap | sasl.kerberos.service.name = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.585722455Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=109.544µs 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:20 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.677+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:20 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.590260511Z level=info msg="Executing migration" id="create annotation table v5" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.685+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.59178979Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.531849ms 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.login.retry.backoff.ms = 100 23:16:20 policy-pap | sasl.login.callback.handler.class = null 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.595083906Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:20 policy-pap | sasl.login.class = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.685+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.596296135Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.212859ms 23:16:20 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:20 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:20 policy-pap | sasl.login.connect.timeout.ms = null 23:16:20 
policy-apex-pdp | [2024-03-15T23:14:43.722+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.599451117Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 policy-pap | sasl.login.read.timeout.ms = null 23:16:20 policy-apex-pdp | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.600251983Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=800.485µs 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:20 kafka | sasl.oauthbearer.expected.audience = null 23:16:20 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.724+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.604443727Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.oauthbearer.expected.issuer = null 23:16:20 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.605335006Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=891.199µs 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.732+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.608782597Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.609979756Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.196149ms 23:16:20 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:20 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.733+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.612974592Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.779+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.614076487Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.100765ms 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:20 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:20 policy-pap | sasl.mechanism = GSSAPI 23:16:20 policy-apex-pdp | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.618350065Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:20 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.618393256Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=44.291µs 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:20 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.622190009Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:20 policy-db-migrator | 23:16:20 kafka | sasl.server.callback.handler.class = null 23:16:20 policy-pap | sasl.oauthbearer.expected.issuer = null 
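The PDP_STATUS payloads quoted above all carry the same field set: pdpType, state, healthy, description, messageName, requestId, timestampMs, name and pdpGroup (plus policies/response on the PdpUpdate and PdpStateChange replies). A hedged sketch of producing a heartbeat-shaped payload with Gson follows; the PdpStatus class here is an illustrative stand-in, not the actual ONAP message class.

    import com.google.gson.Gson;
    import java.util.UUID;

    public class PdpStatusSketch {
        // Illustrative stand-in for the real PDP_STATUS message class;
        // field names and sample values are taken from the log above.
        static class PdpStatus {
            String pdpType = "apex";
            String state = "PASSIVE";
            String healthy = "HEALTHY";
            String description = "Pdp Heartbeat";
            String messageName = "PDP_STATUS";
            String requestId = UUID.randomUUID().toString();
            long timestampMs = System.currentTimeMillis();
            String name = "apex-4a6e2547-14f7-4b7d-af5c-d49180142040"; // from the log
            String pdpGroup = "defaultGroup";
        }

        public static void main(String[] args) {
            // Prints a JSON object with the same shape as the heartbeats above.
            System.out.println(new Gson().toJson(new PdpStatus()));
        }
    }
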
23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.791+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.626440345Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.255067ms 23:16:20 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:20 kafka | sasl.server.max.receive.size = 524288 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.631428886Z level=info msg="Executing migration" id="Drop category_id index" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:20 policy-apex-pdp | [2024-03-15T23:14:43.791+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.632519971Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.091615ms 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 kafka | security.providers = null 23:16:20 policy-apex-pdp | [2024-03-15T23:14:56.164+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.2 - policyadmin [15/Mar/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10648 "-" "Prometheus/2.50.1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.635816617Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:20 policy-apex-pdp | [2024-03-15T23:15:56.083+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.2 - policyadmin [15/Mar/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.50.1" 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.642809362Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.992595ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.647452582Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:20 kafka | socket.connection.setup.timeout.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.647988369Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=535.847µs 23:16:20 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:20 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:20 
kafka | socket.listen.backlog.size = 50 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.651017526Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:20 kafka | socket.receive.buffer.bytes = 102400 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.651618706Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=600.88µs 23:16:20 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:20 policy-pap | security.protocol = PLAINTEXT 23:16:20 kafka | socket.request.max.bytes = 104857600 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.655625185Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | security.providers = null 23:16:20 kafka | socket.send.buffer.bytes = 102400 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.65734373Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.715975ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | send.buffer.bytes = 131072 23:16:20 kafka | ssl.cipher.suites = [] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.662750654Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | session.timeout.ms = 45000 23:16:20 kafka | ssl.client.auth = none 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.674175782Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.426338ms 23:16:20 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:20 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:20 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.678500991Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:20 kafka | ssl.endpoint.identification.algorithm = https 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.679194623Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=693.302µs 23:16:20 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:20 policy-pap | ssl.cipher.suites = null 23:16:20 kafka | ssl.engine.factory.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.682735667Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 kafka | ssl.key.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.683880034Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.144457ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:20 kafka | ssl.keymanager.algorithm = 
SunX509 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.693027329Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | ssl.engine.factory.class = null 23:16:20 kafka | ssl.keystore.certificate.chain = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.693553125Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=532.037µs 23:16:20 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:20 policy-pap | ssl.key.password = null 23:16:20 kafka | ssl.keystore.key = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.699086364Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.699786426Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=699.853µs 23:16:20 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:20 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:20 kafka | ssl.keystore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.704213139Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.keystore.certificate.chain = null 23:16:20 kafka | ssl.keystore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.704624682Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=411.054µs 23:16:20 policy-db-migrator | 23:16:20 policy-pap | ssl.keystore.key = null 23:16:20 kafka | ssl.keystore.type = JKS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.716813864Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | ssl.keystore.location = null 23:16:20 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.721639269Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.826215ms 23:16:20 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:20 kafka | ssl.protocol = TLSv1.3 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.725315388Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:20 policy-pap | ssl.keystore.password = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | ssl.provider = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.729884735Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.568847ms 23:16:20 policy-pap | ssl.keystore.type = JKS 23:16:20 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 kafka | ssl.secure.random.implementation = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.734346308Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:20 policy-pap | ssl.protocol = 
TLSv1.3 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | ssl.trustmanager.algorithm = PKIX 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.735387192Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.041034ms 23:16:20 policy-pap | ssl.provider = null 23:16:20 policy-db-migrator | 23:16:20 kafka | ssl.truststore.certificates = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.741676204Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:20 policy-pap | ssl.secure.random.implementation = null 23:16:20 kafka | ssl.truststore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.742711478Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.035164ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:20 kafka | ssl.truststore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.746453458Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:20 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:20 policy-pap | ssl.truststore.certificates = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.746771308Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=317.47µs 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.truststore.location = null 23:16:20 kafka | ssl.truststore.type = JKS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.751890883Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:20 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | ssl.truststore.password = null 23:16:20 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.758467105Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.570881ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.truststore.type = JKS 23:16:20 kafka | transaction.max.timeout.ms = 900000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.763042392Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 kafka | transaction.partition.verification.enable = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.764093926Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.051784ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | 23:16:20 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.767678201Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:20 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:20 policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:20 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:20 grafana | logger=migrator 
t=2024-03-15T23:13:50.76796382Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=282.549µs 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:20 kafka | transaction.state.log.min.isr = 2 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.772705213Z level=info msg="Executing migration" id="Move region to single row" 23:16:20 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461622 23:16:20 kafka | transaction.state.log.num.partitions = 50 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.773622683Z level=info msg="Migration successfully executed" id="Move region to single row" duration=916.579µs 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Subscribed to topic(s): policy-pdp-pap 23:16:20 kafka | transaction.state.log.replication.factor = 3 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.778965174Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:20 kafka | transaction.state.log.segment.bytes = 104857600 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.780010688Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.045384ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e833a44a-4d39-4a1d-8bf3-bd02ef013e96, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@17ebbf1e 23:16:20 kafka | transactional.id.expiration.ms = 604800000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.785275238Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:20 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:20 policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e833a44a-4d39-4a1d-8bf3-bd02ef013e96, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, 
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:20 kafka | unclean.leader.election.enable = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.786656922Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.377505ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:20 kafka | unstable.api.versions.enable = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.791892491Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:20 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | allow.auto.create.topics = true 23:16:20 kafka | zookeeper.clientCnxnSocket = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.793394749Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.501669ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | auto.commit.interval.ms = 5000 23:16:20 kafka | zookeeper.connect = zookeeper:2181 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.798375829Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | auto.include.jmx.reporter = true 23:16:20 kafka | zookeeper.connection.timeout.ms = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.799550137Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.174268ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | auto.offset.reset = latest 23:16:20 kafka | zookeeper.max.in.flight.requests = 10 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.803559596Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:20 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:20 policy-pap | bootstrap.servers = [kafka:9092] 23:16:20 kafka | zookeeper.metadata.migration.enable = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.804947911Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.388765ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | check.crcs = true 23:16:20 kafka | zookeeper.session.timeout.ms = 18000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.809987803Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:20 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:20 kafka | zookeeper.set.acl = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.81143905Z level=info msg="Migration 
successfully executed" id="Add index for alert_id on annotation table" duration=1.450897ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | client.id = consumer-policy-pap-4 23:16:20 kafka | zookeeper.ssl.cipher.suites = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.818522178Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | client.rack = 23:16:20 kafka | zookeeper.ssl.client.enable = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.818679673Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=157.935µs 23:16:20 policy-db-migrator | 23:16:20 policy-pap | connections.max.idle.ms = 540000 23:16:20 kafka | zookeeper.ssl.crl.enable = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.82358133Z level=info msg="Executing migration" id="create test_data table" 23:16:20 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:20 policy-pap | default.api.timeout.ms = 60000 23:16:20 kafka | zookeeper.ssl.enabled.protocols = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.825183652Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.601422ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | enable.auto.commit = true 23:16:20 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.830775232Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:20 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 kafka | zookeeper.ssl.keystore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.831694372Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=918.75µs 23:16:20 policy-pap | exclude.internal.topics = true 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | zookeeper.ssl.keystore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.835861496Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:20 policy-pap | fetch.max.bytes = 52428800 23:16:20 policy-db-migrator | 23:16:20 kafka | zookeeper.ssl.keystore.type = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.836896649Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.031283ms 23:16:20 policy-pap | fetch.max.wait.ms = 500 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.840738443Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:20 policy-pap | fetch.min.bytes = 1 23:16:20 kafka | zookeeper.ssl.ocsp.enable = false 23:16:20 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.841804087Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.066194ms 23:16:20 policy-pap | group.id = policy-pap 23:16:20 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | 
group.instance.id = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.846268221Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:20 kafka | zookeeper.ssl.truststore.location = null 23:16:20 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | heartbeat.interval.ms = 3000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.84655272Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=284.489µs 23:16:20 kafka | zookeeper.ssl.truststore.password = null 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | interceptor.classes = [] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.84997752Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:20 kafka | zookeeper.ssl.truststore.type = null 23:16:20 policy-db-migrator | 23:16:20 policy-pap | internal.leave.group.on.close = true 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.850428595Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=450.854µs 23:16:20 kafka | (kafka.server.KafkaConfig) 23:16:20 policy-db-migrator | 23:16:20 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.853896676Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:20 kafka | [2024-03-15 23:13:53,437] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:20 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:20 policy-pap | isolation.level = read_uncommitted 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.854053001Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=156.145µs 23:16:20 kafka | [2024-03-15 23:13:53,437] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.85900046Z level=info msg="Executing migration" id="create team table" 23:16:20 kafka | [2024-03-15 23:13:53,438] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:20 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | max.partition.fetch.bytes = 1048576 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.860339884Z level=info msg="Migration successfully executed" id="create team table" duration=1.339313ms 23:16:20 kafka | [2024-03-15 23:13:53,441] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | max.poll.interval.ms = 300000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.866605535Z 
level=info msg="Executing migration" id="add index team.org_id" 23:16:20 kafka | [2024-03-15 23:13:53,468] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:20 policy-db-migrator | 23:16:20 policy-pap | max.poll.records = 500 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.868247598Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.641773ms 23:16:20 kafka | [2024-03-15 23:13:53,472] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:20 policy-db-migrator | 23:16:20 policy-pap | metadata.max.age.ms = 300000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.872526676Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:20 kafka | [2024-03-15 23:13:53,480] INFO Loaded 0 logs in 12ms (kafka.log.LogManager) 23:16:20 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:20 policy-pap | metric.reporters = [] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.874051295Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.524759ms 23:16:20 kafka | [2024-03-15 23:13:53,481] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | metrics.num.samples = 2 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.879163109Z level=info msg="Executing migration" id="Add column uid in team" 23:16:20 kafka | [2024-03-15 23:13:53,482] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:16:20 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | metrics.recording.level = INFO 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.883140887Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.982458ms 23:16:20 kafka | [2024-03-15 23:13:53,493] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | metrics.sample.window.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.889344517Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:20 kafka | [2024-03-15 23:13:53,536] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:20 policy-db-migrator | 23:16:20 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.889686908Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=342.991µs 23:16:20 kafka | [2024-03-15 23:13:53,567] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:20 policy-db-migrator | 23:16:20 policy-pap | receive.buffer.bytes = 65536 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.893385947Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:20 kafka | [2024-03-15 23:13:53,582] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:20 
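
The numbered policy-db-migrator scripts running through this stretch (1000-1040) all apply one pattern: a composite (name, version) foreign key tying toscaservicetemplate or toscatopologytemplate to the referenced TOSCA entity table, with RESTRICT on both update and delete. Below is a minimal JDBC sketch of applying one such script; the connection URL, database name, and credentials are illustrative assumptions (the real migrator feeds the .sql files to the database directly), and the DDL string is copied from the log as-is.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Minimal sketch: apply one numbered FK migration script over JDBC.
// URL/credentials are hypothetical; the CSIT stack provisions the real ones.
public class FkMigrationSketch {
    public static void main(String[] args) throws SQLException {
        String script = "1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql";
        // Composite (name, version) foreign key, verbatim from the migrator log
        // (including the doubled 's' in nodeTemplatessVersion).
        String ddl = "ALTER TABLE toscatopologytemplate "
                + "ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName "
                + "FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) "
                + "REFERENCES toscanodetemplates (name, version) "
                + "ON UPDATE RESTRICT ON DELETE RESTRICT";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            System.out.println("> upgrade " + script);
            stmt.executeUpdate(ddl); // DDL auto-commits in MariaDB
        }
    }
}
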
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:20 policy-pap | reconnect.backoff.max.ms = 1000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.895095942Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.709265ms 23:16:20 kafka | [2024-03-15 23:13:53,608] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | reconnect.backoff.ms = 50 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.90063327Z level=info msg="Executing migration" id="create team member table" 23:16:20 kafka | [2024-03-15 23:13:53,995] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:20 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | request.timeout.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.901979174Z level=info msg="Migration successfully executed" id="create team member table" duration=1.346034ms 23:16:20 kafka | [2024-03-15 23:13:54,014] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | retry.backoff.ms = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.907996827Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:20 kafka | [2024-03-15 23:13:54,014] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:20 policy-db-migrator | 23:16:20 policy-pap | sasl.client.callback.handler.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.909681962Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.682865ms 23:16:20 kafka | [2024-03-15 23:13:54,020] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.913175874Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:20 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:20 policy-pap | sasl.jaas.config = null 23:16:20 kafka | [2024-03-15 23:13:54,024] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.914204047Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.027273ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 kafka | [2024-03-15 23:13:54,050] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.917852255Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:20 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT 
TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:20 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 kafka | [2024-03-15 23:13:54,053] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.918921129Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.068794ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | sasl.kerberos.service.name = null 23:16:20 kafka | [2024-03-15 23:13:54,055] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.92298831Z level=info msg="Executing migration" id="Add column email to team table" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 kafka | [2024-03-15 23:13:54,058] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.927911028Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.922458ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 kafka | [2024-03-15 23:13:54,059] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.936281358Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:20 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:20 policy-pap | sasl.login.callback.handler.class = null 23:16:20 kafka | [2024-03-15 23:13:54,074] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:20 policy-pap | sasl.login.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.942985764Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.705785ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,082] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:20 policy-pap | sasl.login.connect.timeout.ms = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.94598114Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:20 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:20 kafka | [2024-03-15 23:13:54,114] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:16:20 policy-pap | sasl.login.read.timeout.ms = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.95094675Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.922938ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,140] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1710544434129,1710544434129,1,0,0,72057608227586049,258,0,27 23:16:20 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.95437457Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:20 policy-db-migrator | 23:16:20 kafka | (kafka.zk.KafkaZkClient) 23:16:20 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.955436214Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.063184ms 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,142] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:20 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.961622743Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:20 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:20 kafka | [2024-03-15 23:13:54,197] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:20 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.962720809Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.097666ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,204] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.965790168Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:20 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:20 kafka | [2024-03-15 23:13:54,211] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.967679898Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.887751ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,213] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 policy-pap | sasl.mechanism = GSSAPI 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.976526073Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,223] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:20 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.977532215Z level=info msg="Migration successfully 
executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.005482ms 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,233] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:20 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.982290649Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:20 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:20 kafka | [2024-03-15 23:13:54,237] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:20 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.983346253Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.055574ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,243] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.987854668Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:20 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:20 kafka | [2024-03-15 23:13:54,244] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.989539312Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.687184ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,248] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.995270796Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,267] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.996382082Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.113026ms 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,272] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator) 23:16:20 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:50.999636957Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:20 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:20 kafka | [2024-03-15 23:13:54,273] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:20 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.001167326Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.528679ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,282] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 23:16:20 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.004758111Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:20 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:20 kafka | [2024-03-15 23:13:54,286] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:20 policy-pap | security.protocol = PLAINTEXT 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.005574278Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=815.856µs 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,291] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:20 policy-pap | security.providers = null 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,294] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:20 policy-pap | send.buffer.bytes = 131072 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.010879748Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,296] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:20 policy-pap | session.timeout.ms = 45000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.011325102Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=444.614µs 23:16:20 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:20 kafka | [2024-03-15 23:13:54,312] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.015890688Z level=info msg="Executing migration" id="create tag table" 23:16:20 kafka | [2024-03-15 23:13:54,316] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:20 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, 
version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:20 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.016735245Z level=info msg="Migration successfully executed" id="create tag table" duration=844.637µs 23:16:20 kafka | [2024-03-15 23:13:54,316] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.cipher.suites = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.022683666Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:20 kafka | [2024-03-15 23:13:54,324] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:20 policy-db-migrator | 23:16:20 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.024401471Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.717575ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.030379743Z level=info msg="Executing migration" id="create login attempt table" 23:16:20 kafka | [2024-03-15 23:13:54,332] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:20 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:20 policy-pap | ssl.engine.factory.class = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.031303493Z level=info msg="Migration successfully executed" id="create login attempt table" duration=922.729µs 23:16:20 kafka | [2024-03-15 23:13:54,334] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.key.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.034658Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,334] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:20 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.035604591Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=946.44µs 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,334] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:20 policy-pap | ssl.keystore.certificate.chain = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.038911637Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:20 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:20 kafka | [2024-03-15 23:13:54,335] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:20 policy-pap | ssl.keystore.key = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.039880938Z level=info msg="Migration successfully executed" id="drop index 
IDX_login_attempt_username - v1" duration=968.932µs 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,337] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:20 policy-pap | ssl.keystore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.045182478Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:20 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:20 kafka | [2024-03-15 23:13:54,337] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:20 policy-pap | ssl.keystore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.060452397Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.269739ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,338] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:20 policy-pap | ssl.keystore.type = JKS 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.068713672Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,338] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:20 policy-pap | ssl.protocol = TLSv1.3 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.070033115Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.315572ms 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,339] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:20 policy-pap | ssl.provider = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.073601029Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:20 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:20 kafka | [2024-03-15 23:13:54,341] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:20 policy-pap | ssl.secure.random.implementation = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.076607735Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=3.005166ms 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,349] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:20 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.082888227Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:20 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:20 kafka | [2024-03-15 23:13:54,350] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:20 policy-pap | ssl.truststore.certificates = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.08329983Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=408.923µs 23:16:20 policy-db-migrator | -------------- 
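
The 0120-0140 scripts just applied are worth reading as a unit: they rebuild the pdpstatistics primary key around a surrogate ID. The old key is dropped, an ID column is added, every existing row is backfilled with a unique value via ROW_NUMBER() ordered by timeStamp, and only then can the new composite key be created. A sketch of the same sequence over JDBC follows; the connection details are illustrative assumptions, the SQL is taken from the log, and the extra POLICYUNDEPLOY* columns of 0130 are elided.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch of the 0120-0140 pdpstatistics primary-key rework logged above.
public class PdpStatisticsPkRework {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_user");
             Statement s = c.createStatement()) {
            // 0120: drop the old key so the surrogate column can join a fresh one.
            s.executeUpdate("ALTER TABLE pdpstatistics DROP PRIMARY KEY");
            // 0130: add the surrogate column (other added columns elided here).
            s.executeUpdate("ALTER TABLE pdpstatistics ADD COLUMN ID BIGINT NOT NULL");
            // 0140: backfill ID so each row is unique before the key exists;
            // ROW_NUMBER() over the timeStamp ordering does it in a single pass.
            s.executeUpdate("UPDATE pdpstatistics AS p JOIN (SELECT name, version, timeStamp, "
                    + "ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics "
                    + "GROUP BY name, version, timeStamp) AS t "
                    + "ON (p.name = t.name AND p.version = t.version AND p.timeStamp = t.timeStamp) "
                    + "SET p.id = t.row_num");
            s.executeUpdate("ALTER TABLE pdpstatistics "
                    + "ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)");
        }
    }
}
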
23:16:20 kafka | [2024-03-15 23:13:54,353] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:20 policy-pap | ssl.truststore.location = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.08734113Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,354] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:20 policy-pap | ssl.truststore.password = null 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.088408114Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.072465ms 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,354] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:20 policy-pap | ssl.truststore.type = JKS 23:16:20 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:20 kafka | [2024-03-15 23:13:54,354] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:20 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.093555249Z level=info msg="Executing migration" id="create user auth table" 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,355] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:20 policy-pap | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.094914972Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.359553ms 23:16:20 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:20 kafka | [2024-03-15 23:13:54,357] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:20 policy-pap | [2024-03-15T23:14:21.627+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.100505782Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:20 policy-db-migrator | JOIN pdpstatistics b 23:16:20 kafka | [2024-03-15 23:13:54,358] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:20 policy-pap | [2024-03-15T23:14:21.627+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.10170022Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.192358ms 23:16:20 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:20 kafka | [2024-03-15 23:13:54,364] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:20 policy-pap | [2024-03-15T23:14:21.627+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461627 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.106627128Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:20 policy-db-migrator | SET a.id = b.id 23:16:20 kafka | [2024-03-15 
23:13:54,365] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:20 policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.106748462Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=121.904µs 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:13:54,366] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:20 policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.112248268Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:13:54,366] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.120786392Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.536534ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e833a44a-4d39-4a1d-8bf3-bd02ef013e96, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:20 kafka | [2024-03-15 23:13:54,366] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.126188545Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:20 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:20 policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a833d76c-6968-4ee8-9b4d-b3fefbf07611, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:20 kafka | [2024-03-15 23:13:54,367] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. 
(org.apache.kafka.clients.NetworkClient) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.131855987Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.666042ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ad60098f-8467-4f6a-8a6c-235480b406c4, alive=false, publisher=null]]: starting 23:16:20 kafka | [2024-03-15 23:13:54,367] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.137274491Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:20 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:20 policy-pap | [2024-03-15T23:14:21.647+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:20 kafka | [2024-03-15 23:13:54,370] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.14287755Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.602899ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | acks = -1 23:16:20 kafka | [2024-03-15 23:13:54,371] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.7:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.148252723Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | auto.include.jmx.reporter = true 23:16:20 kafka | [2024-03-15 23:13:54,372] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.154151072Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.901919ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | batch.size = 16384 23:16:20 kafka | [2024-03-15 23:13:54,379] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.157744027Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:20 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:20 policy-pap | bootstrap.servers = [kafka:9092] 23:16:20 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
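
The WARN and IOException in this stretch are a benign startup race: the controller opens its request channel to broker 1 (itself) before the socket server has enabled request processing, and the log shows the same RequestSendThread connecting successfully moments later. Wait-for-broker helpers in test harnesses bridge that window the same way, by retrying; below is a minimal sketch using the Kafka Admin client, with the bootstrap address taken from the log and the timeout and back-off values chosen arbitrarily for illustration.

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

// Illustrative readiness probe: retry until the broker accepts connections.
public class BrokerReadyProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (Admin admin = Admin.create(props)) {
            for (int attempt = 1; ; attempt++) {
                try {
                    int brokers = admin.describeCluster()
                            .nodes().get(5, TimeUnit.SECONDS).size();
                    System.out.println("broker(s) up: " + brokers);
                    return;
                } catch (Exception e) {
                    // Same transient failure mode as the WARN above: the listener
                    // is not yet accepting connections, so back off and retry.
                    System.out.println("attempt " + attempt + " failed: " + e.getMessage());
                    Thread.sleep(1000);
                }
            }
        }
    }
}
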
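Stepping back from the broker noise: the ConsumerConfig dump that opened this stretch, together with the "Subscribed to topic(s): policy-pdp-pap" line, pins down everything needed to reproduce PAP's consumer with the plain Kafka client. A stand-alone sketch follows; it is not PAP's SingleThreadedKafkaTopicSource wrapper, and the 15-second poll duration merely mirrors the fetchTimeout=15000 shown in the log.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Stand-alone consumer mirroring the key ConsumerConfig values logged above.
public class PdpPapListenerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // bootstrap.servers
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // group.id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");     // auto.offset.reset
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");      // enable.auto.commit
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // same topic PAP subscribes to
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }
}
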
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.158857923Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.113516ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | buffer.memory = 33554432 23:16:20 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.163586144Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:20 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:20 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.172118858Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.527604ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | client.id = producer-1 23:16:20 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.178363708Z level=info msg="Executing migration" id="create server_lock table" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | compression.type = none 23:16:20 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.179146113Z level=info msg="Migration successfully executed" id="create server_lock table" duration=781.125µs 23:16:20 policy-db-migrator | 23:16:20 policy-pap | connections.max.idle.ms = 540000 23:16:20 kafka | [2024-03-15 23:13:54,384] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.18278568Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:20 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:20 policy-pap | delivery.timeout.ms = 120000 23:16:20 kafka | [2024-03-15 23:13:54,388] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.184544956Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.759766ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | enable.idempotence = true 23:16:20 kafka | [2024-03-15 23:13:54,388] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.189467694Z level=info msg="Executing migration" id="create user auth token table" 23:16:20 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:20 policy-pap | interceptor.classes = [] 23:16:20 kafka | [2024-03-15 23:13:54,388] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.191091666Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.626872ms 23:16:20 policy-db-migrator 
| -------------- 23:16:20 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:20 kafka | [2024-03-15 23:13:54,388] INFO Kafka startTimeMs: 1710544434380 (org.apache.kafka.common.utils.AppInfoParser) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.201027825Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | linger.ms = 0 23:16:20 kafka | [2024-03-15 23:13:54,390] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.202291046Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.26203ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | max.block.ms = 60000 23:16:20 kafka | [2024-03-15 23:13:54,495] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.206527581Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:20 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:20 policy-pap | max.in.flight.requests.per.connection = 5 23:16:20 kafka | [2024-03-15 23:13:54,556] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.209493666Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.963565ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | max.request.size = 1048576 23:16:20 kafka | [2024-03-15 23:13:54,632] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.214115805Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:20 policy-pap | metadata.max.age.ms = 300000 23:16:20 kafka | [2024-03-15 23:13:54,634] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.216044957Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.935242ms 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | metadata.max.idle.ms = 300000 23:16:20 kafka | [2024-03-15 23:13:59,390] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.223544007Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:20 policy-db-migrator | 23:16:20 policy-pap | metric.reporters = [] 23:16:20 kafka | [2024-03-15 23:13:59,390] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.22957702Z 
level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.026923ms 23:16:20 policy-db-migrator | 23:16:20 policy-pap | metrics.num.samples = 2 23:16:20 kafka | [2024-03-15 23:14:22,183] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.233094333Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:20 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:20 policy-pap | metrics.recording.level = INFO 23:16:20 kafka | [2024-03-15 23:14:22,191] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.234263401Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.167758ms 23:16:20 policy-pap | metrics.sample.window.ms = 30000 23:16:20 kafka | [2024-03-15 23:14:22,195] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:20 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.240326915Z level=info msg="Executing migration" id="create cache_data table" 23:16:20 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:20 kafka | [2024-03-15 23:14:22,203] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.2414024Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.074915ms 23:16:20 policy-pap | partitioner.availability.timeout.ms = 0 23:16:20 kafka | [2024-03-15 23:14:22,237] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(RYQK08lOSYaXD4Alb86gyg),Map(policy-pdp-pap-0 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(R2o1IzsbR_ucSKqMoC8FrA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.248750955Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:20 policy-pap | partitioner.class = null 23:16:20 kafka | [2024-03-15 23:14:22,247] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.250107239Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.375124ms 23:16:20 policy-pap | partitioner.ignore.keys = false 23:16:20 kafka | [2024-03-15 23:14:22,253] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.2545119Z level=info msg="Executing migration" id="create short_url table v1" 23:16:20 policy-pap | receive.buffer.bytes = 32768 23:16:20 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.255506202Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=994.562µs 23:16:20 policy-pap | reconnect.backoff.max.ms = 1000 23:16:20 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.263427666Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:20 policy-pap | reconnect.backoff.ms = 50 23:16:20 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.264697997Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.278041ms 23:16:20 policy-pap | request.timeout.ms = 30000 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.268685715Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:20 policy-pap | retries = 2147483647 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.268781478Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=96.033µs 23:16:20 policy-pap | retry.backoff.ms = 100 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.271939239Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:20 policy-pap | sasl.client.callback.handler.class = null 23:16:20 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:20 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.272116275Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=177.275µs 23:16:20 policy-pap | sasl.jaas.config = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.284581374Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:20 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:20 kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.286441564Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.86545ms 23:16:20 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.293929764Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:20 policy-pap | sasl.kerberos.service.name = null 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.294970647Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.041113ms 23:16:20 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.298253533Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:20 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:20 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:20 kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.299599086Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.349353ms 23:16:20 policy-pap | sasl.login.callback.handler.class = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.302822189Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:20 policy-pap | sasl.login.class = null 23:16:20 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:20 kafka | [2024-03-15 23:14:22,257] INFO [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.302976494Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=155.325µs 23:16:20 policy-pap | sasl.login.connect.timeout.ms = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.309872315Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:20 policy-pap | sasl.login.read.timeout.ms = null 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.311846179Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.969983ms 23:16:20 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.317532931Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:20 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:20 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:20 kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.31905745Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.524739ms 23:16:20 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.324583107Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:20 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:20 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:20 kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.326287742Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.701644ms 23:16:20 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 
23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.33339436Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:20 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.334949339Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.55553ms 23:16:20 policy-pap | sasl.mechanism = GSSAPI 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.340452346Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:20 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:20 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:20 kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.353830405Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=13.378469ms 23:16:20 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.360147487Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:20 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:20 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:20 kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.361031646Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=884.239µs 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.370141998Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
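The kafka controller entries interleaved through this stretch of the log trace each __consumer_offsets partition, plus policy-pdp-pap-0, through the broker's partition state machine: NonExistentPartition -> NewPartition, always with a single assigned replica, which is what a one-broker deployment with replication factor 1 yields. As a minimal sketch only (assuming the kafka:9092 bootstrap address seen elsewhere in this log and the standard kafka-clients AdminClient; the class name and topic settings are illustrative, not taken from the CSIT scripts), a topic acquires that same single-replica assignment like so:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import java.util.List;
    import java.util.Map;

    public class SingleReplicaTopicSketch {
        public static void main(String[] args) throws Exception {
            // Bootstrap address as logged by the CSIT containers.
            Map<String, Object> conf =
                    Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(conf)) {
                // 1 partition, replication factor 1: each partition is assigned
                // to the lone broker, i.e. ReplicaAssignment(replicas=1) above.
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }

On a single broker the controller then advances each NewPartition to OnlinePartition with leader=1 and that broker as the entire ISR, which is exactly what the LeaderAndIsr(leader=1, leaderEpoch=0, ...) entries later in this log record.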
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.370272402Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=131.244µs 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.374822898Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:20 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:20 kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.376383698Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.56164ms 23:16:20 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.380044225Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:20 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:20 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:20 kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.381171582Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.126936ms 23:16:20 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.389923432Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:20 policy-pap | security.protocol = PLAINTEXT 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.390907834Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=983.892µs 23:16:20 policy-pap | security.providers = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with 
assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.397076772Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:20 policy-pap | send.buffer.bytes = 131072 23:16:20 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:20 kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.397181725Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=103.254µs 23:16:20 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.402132314Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:20 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,261] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.403501168Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.368943ms 23:16:20 policy-pap | ssl.cipher.suites = null 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.416596268Z level=info msg="Executing migration" id="create alert_instance table" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.417726174Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.129517ms 23:16:20 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:20 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.424918414Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:20 policy-db-migrator | -------------- 23:16:20 policy-pap | ssl.engine.factory.class = null 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.425965908Z 
level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.047034ms 23:16:20 policy-pap | ssl.key.password = null 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.435598117Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:20 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:20 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.436674021Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.077434ms 23:16:20 policy-pap | ssl.keystore.certificate.chain = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.4447261Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:20 policy-pap | ssl.keystore.key = null 23:16:20 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.452440617Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.743779ms 23:16:20 policy-pap | ssl.keystore.location = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.457356995Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:20 policy-pap | ssl.keystore.password = null 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.458636546Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.282212ms 23:16:20 policy-pap | ssl.keystore.type = JKS 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,264] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.463202642Z level=info msg="Executing migration" id="remove 
index def_org_id, current_state on alert_instance" 23:16:20 policy-pap | ssl.protocol = TLSv1.3 23:16:20 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:20 kafka | [2024-03-15 23:14:22,264] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.464332608Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.130246ms 23:16:20 policy-pap | ssl.provider = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.469004888Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:20 policy-pap | ssl.secure.random.implementation = null 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.498525385Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.512656ms 23:16:20 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.506973526Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:20 policy-pap | ssl.truststore.certificates = null 23:16:20 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.528666521Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.690846ms 23:16:20 policy-pap | ssl.truststore.location = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.534506068Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:20 policy-pap | ssl.truststore.password = null 23:16:20 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.535338745Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=832.767µs 23:16:20 policy-pap | ssl.truststore.type = JKS 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana 
| logger=migrator t=2024-03-15T23:13:51.540912814Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:20 policy-pap | transaction.timeout.ms = 60000 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.541876065Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=963.081µs 23:16:20 policy-pap | transactional.id = null 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.547230016Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:20 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:20 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.552983361Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.749965ms 23:16:20 policy-pap | 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.561792103Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:20 policy-pap | [2024-03-15T23:14:21.661+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
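Just above, policy-pap reports "Instantiated an idempotent producer" for clientId=producer-1 on Kafka 3.6.1, and the ProducerConfig dump for producer-2 continues below (bootstrap.servers = [kafka:9092], acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer for both key and value). A hedged reconstruction of an equivalent producer, using only values visible in the dump; the topic name and record payload are placeholders:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import java.util.Properties;

    public class PapProducerSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // logged bootstrap.servers
            p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);        // logged enable.idempotence = true
            p.put(ProducerConfig.ACKS_CONFIG, "-1");                      // logged acks = -1 (i.e. all)
            p.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);      // logged retries = 2147483647
            p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                // Topic and payload are placeholders for illustration only.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "value"));
            }
        }
    }

With enable.idempotence = true the client also requires acks=all (logged as -1), retries > 0, and max.in.flight.requests.per.connection of at most 5 (logged as 5), all consistent with the configuration dump that follows.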
23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.565816402Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.025559ms 23:16:20 policy-pap | [2024-03-15T23:14:21.680+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.571089922Z level=info msg="Executing migration" id="create alert_rule table" 23:16:20 policy-pap | [2024-03-15T23:14:21.680+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:20 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.572154966Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.064365ms 23:16:20 policy-pap | [2024-03-15T23:14:21.680+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461680 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.577324061Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:20 policy-pap | [2024-03-15T23:14:21.681+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ad60098f-8467-4f6a-8a6c-235480b406c4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:20 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.578657794Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.338233ms 23:16:20 policy-pap | [2024-03-15T23:14:21.681+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6be4621a-d017-49e7-bcd8-e5e0cbe56c95, alive=false, publisher=null]]: starting 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.582240919Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:16:20 policy-pap | [2024-03-15T23:14:21.681+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:20 policy-db-migrator | 23:16:20 grafana | 
logger=migrator t=2024-03-15T23:13:51.583313674Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.072674ms 23:16:20 policy-pap | acks = -1 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.589190902Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:20 policy-pap | auto.include.jmx.reporter = true 23:16:20 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.590354549Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.163247ms 23:16:20 policy-pap | batch.size = 16384 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.596717323Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:20 policy-pap | bootstrap.servers = [kafka:9092] 23:16:20 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.596786576Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=70.452µs 23:16:20 policy-pap | buffer.memory = 33554432 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.600025619Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:20 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.606417984Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.394885ms 23:16:20 policy-pap | client.id = producer-2 23:16:20 policy-db-migrator | msg 23:16:20 kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.612065335Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:20 policy-pap | compression.type = none 23:16:20 policy-db-migrator | upgrade to 1100 completed 23:16:20 kafka | [2024-03-15 
23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.619487563Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.424438ms 23:16:20 policy-pap | connections.max.idle.ms = 540000 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.627981336Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:20 policy-pap | delivery.timeout.ms = 120000 23:16:20 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.634823975Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.841979ms 23:16:20 policy-pap | enable.idempotence = true 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.639176715Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:20 policy-pap | interceptor.classes = [] 23:16:20 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.639911238Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=734.633µs 23:16:20 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.644818696Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:20 policy-pap | linger.ms = 0 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.646683436Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.861479ms 23:16:20 policy-pap | max.block.ms = 60000 23:16:20 policy-db-migrator | 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | 
logger=migrator t=2024-03-15T23:13:51.650327332Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:20 policy-pap | max.in.flight.requests.per.connection = 5 23:16:20 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.658870196Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.543424ms 23:16:20 policy-pap | max.request.size = 1048576 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.664757905Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:20 policy-pap | metadata.max.age.ms = 300000 23:16:20 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.66896167Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.203185ms 23:16:20 policy-pap | metadata.max.idle.ms = 300000 23:16:20 policy-db-migrator | -------------- 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.672491423Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:20 policy-pap | metric.reporters = [] 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.673458814Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=966.961µs 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-pap | metrics.num.samples = 2 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.677936298Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-pap | metrics.recording.level = INFO 23:16:20 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.687383151Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.450654ms 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-pap | metrics.sample.window.ms = 30000 23:16:20 
policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.689802728Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.69421659Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.413032ms 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-pap | partitioner.availability.timeout.ms = 0 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.699431207Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-pap | partitioner.class = null 23:16:20 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.69951643Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=86.263µs 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-pap | partitioner.ignore.keys = false 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.707276749Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.709111407Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.834499ms 23:16:20 policy-pap | receive.buffer.bytes = 32768 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.712777135Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:20 policy-pap | reconnect.backoff.max.ms = 1000 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.714687776Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version 
columns" duration=1.909921ms 23:16:20 policy-pap | reconnect.backoff.ms = 50 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.720053918Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:20 policy-pap | request.timeout.ms = 30000 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.721127703Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.073435ms 23:16:20 policy-pap | retries = 2147483647 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.725985888Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:20 policy-pap | retry.backoff.ms = 100 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.726085332Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=100.684µs 23:16:20 policy-pap | sasl.client.callback.handler.class = null 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.729599534Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:20 policy-pap | sasl.jaas.config = null 23:16:20 kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:20 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.738835721Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.237386ms 23:16:20 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:20 kafka | [2024-03-15 23:14:22,279] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:20 policy-db-migrator | -------------- 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.743014235Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:20 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:20 kafka | [2024-03-15 
23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.750335499Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.318685ms
23:16:20 policy-pap | sasl.kerberos.service.name = null
23:16:20 kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.754636637Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
23:16:20 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:20 kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.759274236Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.637109ms
23:16:20 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:20 kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.770850857Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
23:16:20 policy-pap | sasl.login.callback.handler.class = null
23:16:20 kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.780196197Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=9.34581ms
23:16:20 policy-pap | sasl.login.class = null
23:16:20 kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.784710062Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
23:16:20 policy-pap | sasl.login.connect.timeout.ms = null
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.791316343Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.605582ms
23:16:20 policy-pap | sasl.login.read.timeout.ms = null
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.794517946Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
23:16:20 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | TRUNCATE TABLE sequence
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.794566638Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=48.992µs
23:16:20 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.797482301Z level=info msg="Executing migration" id=create_alert_configuration_table
23:16:20 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.798046099Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=563.008µs
23:16:20 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.802465861Z level=info msg="Executing migration" id="Add column default in alert_configuration"
23:16:20 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.813813675Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=11.349224ms
23:16:20 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.818786064Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
23:16:20 policy-pap | sasl.mechanism = GSSAPI
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
23:16:20 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.818833856Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=47.922µs
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.821590484Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.828096763Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.505689ms
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.833550618Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | DROP TABLE pdpstatistics
23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.83455352Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.002962ms
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.838149035Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.849789198Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=11.641103ms
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.858733705Z level=info msg="Executing migration" id=create_ngalert_configuration_table
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:20 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.859995836Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.261401ms
23:16:20 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.865233144Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
23:16:20 kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
23:16:20 policy-pap | security.protocol = PLAINTEXT
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.866489074Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.257701ms
23:16:20 kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 policy-pap | security.providers = null
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.870535094Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
23:16:20 kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 policy-pap | send.buffer.bytes = 131072
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.877248619Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.712995ms
23:16:20 kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | 
23:16:20 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.880456272Z level=info msg="Executing migration" id="create provenance_type table"
23:16:20 kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
23:16:20 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.881232707Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=775.005µs
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 policy-pap | ssl.cipher.suites = null
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.88727191Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | DROP TABLE statistics_sequence
23:16:20 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.88852402Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.25197ms
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 policy-db-migrator | --------------
23:16:20 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.895042439Z level=info msg="Executing migration" id="create alert_image table"
23:16:20 policy-db-migrator | 
23:16:20 policy-pap | ssl.engine.factory.class = null
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.896265549Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.22876ms
23:16:20 policy-db-migrator | policyadmin: OK: upgrade (1300)
23:16:20 policy-pap | ssl.key.password = null
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.901192647Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
23:16:20 policy-db-migrator | name version
23:16:20 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.901966652Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=774.004µs
23:16:20 policy-db-migrator | policyadmin 1300
23:16:20 policy-pap | ssl.keystore.certificate.chain = null
23:16:20 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.907385855Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
23:16:20 policy-db-migrator | ID script operation from_version to_version tag success atTime
23:16:20 policy-pap | ssl.keystore.key = null
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.90752469Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=138.005µs
23:16:20 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.keystore.location = null
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.912538051Z level=info msg="Executing migration" id=create_alert_configuration_history_table
23:16:20 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.keystore.password = null
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.914287927Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.749586ms
23:16:20 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.keystore.type = JKS
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.918366027Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
23:16:20 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.protocol = TLSv1.3
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.919363619Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=997.042µs
23:16:20 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.provider = null
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.92466996Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
23:16:20 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.secure.random.implementation = null
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.925343871Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
23:16:20 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.928858414Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
23:16:20 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.truststore.certificates = null
23:16:20 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.929544116Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=685.002µs
23:16:20 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50
23:16:20 policy-pap | ssl.truststore.location = null
23:16:20 kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.934793154Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
23:16:20 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | ssl.truststore.password = null
23:16:20 kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.936074535Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.281191ms
23:16:20 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | ssl.truststore.type = JKS
23:16:20 kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.941805009Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
23:16:20 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | transaction.timeout.ms = 60000
23:16:20 kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.951676996Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.872236ms
23:16:20 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | transactional.id = null
23:16:20 kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.957528863Z level=info msg="Executing migration" id="create library_element table v1"
23:16:20 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:20 kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.959456145Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.928112ms
23:16:20 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | 
23:16:20 kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.967179563Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
23:16:20 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.682+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
23:16:20 kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.968392692Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.212468ms
23:16:20 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:20 kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.971863003Z level=info msg="Executing migration" id="create library_element_connection table v1"
23:16:20 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:20 kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.972782202Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=919.009µs
23:16:20 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461685
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.97925461Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
23:16:20 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6be4621a-d017-49e7-bcd8-e5e0cbe56c95, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.980944254Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.689484ms
23:16:20 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.989638803Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
23:16:20 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.991231294Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.592641ms
23:16:20 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.687+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.994929262Z level=info msg="Executing migration" id="increase max description length to 2048"
23:16:20 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.690+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.994967914Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=40.002µs
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
23:16:20 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.998695843Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
23:16:20 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:51.998770536Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=78.003µs
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
23:16:20 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.004063178Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
23:16:20 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.004515859Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=452.001µs
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
23:16:20 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|TimerManager|Thread-9] timer manager update started
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.009910246Z level=info msg="Executing migration" id="create data_keys table"
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
23:16:20 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.694+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.011172652Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.261006ms
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
23:16:20 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.694+00:00|INFO|ServiceManager|main] Policy PAP started
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.014894537Z level=info msg="Executing migration" id="create secrets table"
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
23:16:20 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51
23:16:20 policy-pap | [2024-03-15T23:14:21.696+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.587 seconds (process running for 11.243)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.015880964Z level=info msg="Migration successfully executed" id="create secrets table" duration=986.007µs
23:16:20 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
23:16:20 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.156+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: LbZnmjPNTK-gKtiXPvevcA
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.020373561Z level=info msg="Executing migration" id="rename data_keys name column to id"
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
23:16:20 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.056326712Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=35.944031ms
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
23:16:20 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.156+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Cluster ID: LbZnmjPNTK-gKtiXPvevcA
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.065064948Z level=info msg="Executing migration" id="add name column into data_keys"
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
23:16:20 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.157+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: LbZnmjPNTK-gKtiXPvevcA
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.077028685Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=11.968007ms
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:16:20 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.218+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.080604575Z level=info msg="Executing migration" id="copy data_keys id column values into name"
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
23:16:20 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.218+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: LbZnmjPNTK-gKtiXPvevcA
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.080791431Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=186.005µs
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:16:20 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.265+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.084591107Z level=info msg="Executing migration" id="rename data_keys name column to label"
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:16:20 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.291+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.118355977Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.75713ms
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.304+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.124812429Z level=info msg="Executing migration" id="rename data_keys id column back to name"
23:16:20 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.343+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.1540007Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.193461ms
23:16:20 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.392+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.164387752Z level=info msg="Executing migration" id="create kv_store table v1"
23:16:20 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 policy-pap | [2024-03-15T23:14:22.452+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.166297896Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.910344ms
23:16:20 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.501+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.172205492Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.559+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.173362655Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.156373ms
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.606+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.179404315Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.666+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.179796466Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=391.041µs
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.712+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.184792486Z level=info msg="Executing migration" id="create permission table"
23:16:20 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.771+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.186239847Z level=info msg="Migration successfully executed" id="create permission table" duration=1.447391ms
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.820+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.194909141Z level=info msg="Executing migration" id="add unique index permission.role_id"
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.878+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.196732572Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.829601ms
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.926+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.201035173Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:22.987+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.202115704Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.080351ms
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:23.035+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.213144904Z level=info msg="Executing migration" id="create role table"
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:23.098+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:20 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.21406886Z level=info msg="Migration successfully executed" id="create role table" duration=924.036µs
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:23.145+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:20 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.222624811Z level=info msg="Executing migration" id="add column display_name"
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
23:16:20 policy-pap | [2024-03-15T23:14:23.155+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] (Re-)joining group
23:16:20 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.230446451Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.8188ms
23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:23.202+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:20 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.233858687Z level=info msg="Executing migration" id="add column group_name" 23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:23.204+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:20 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.241008008Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.146971ms 23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Request joining group due to: need to re-join with the given member-id: consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e 23:16:20 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.25104458Z level=info msg="Executing migration" id="add index role.org_id" 23:16:20 kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:20 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 23:16:20 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.25208563Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.037109ms 23:16:20 kafka | [2024-03-15 23:14:22,454] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:20 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.259169059Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:20 kafka | [2024-03-15 23:14:22,454] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:20 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.260183377Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.014008ms 23:16:20 kafka | [2024-03-15 23:14:22,454] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:23.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] (Re-)joining group 23:16:20 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.264460788Z level=info msg="Executing migration" 
id="add index role_org_id_uid" 23:16:20 kafka | [2024-03-15 23:14:22,461] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:26.254+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Successfully joined group with generation Generation{generationId=1, memberId='consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e', protocol='range'} 23:16:20 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.265482746Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.021598ms 23:16:20 kafka | [2024-03-15 23:14:22,467] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:26.260+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442', protocol='range'} 23:16:20 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.271128305Z level=info msg="Executing migration" id="create team role table" 23:16:20 kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:26.271+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Finished assignment for group at generation 1: {consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e=Assignment(partitions=[policy-pdp-pap-0])} 23:16:20 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.27199628Z level=info msg="Migration successfully executed" id="create team role table" duration=865.875µs 23:16:20 kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:26.271+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442=Assignment(partitions=[policy-pdp-pap-0])} 23:16:20 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.278005289Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:20 kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | 
[2024-03-15T23:14:26.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Successfully synced group in generation Generation{generationId=1, memberId='consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e', protocol='range'} 23:16:20 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.279092549Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.08697ms 23:16:20 kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:26.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442', protocol='range'} 23:16:20 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.289806361Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 policy-pap | [2024-03-15T23:14:26.302+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.290597093Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=790.372µs 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 policy-pap | [2024-03-15T23:14:26.303+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.301275003Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 policy-pap | [2024-03-15T23:14:26.310+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:20 grafana | logger=migrator 
t=2024-03-15T23:13:52.302310782Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.035939ms 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 policy-pap | [2024-03-15T23:14:26.322+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Adding newly assigned partitions: policy-pdp-pap-0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.310513703Z level=info msg="Executing migration" id="create user role table" 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 policy-pap | [2024-03-15T23:14:26.331+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.311654255Z level=info msg="Migration successfully executed" id="create user role table" duration=1.142242ms 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 23:16:20 policy-pap | [2024-03-15T23:14:26.333+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Found no committed offset for partition policy-pdp-pap-0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.317706326Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:26.358+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
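The "Found no committed offset for partition policy-pdp-pap-0" / "Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, ...}" pair above is standard Kafka consumer behaviour: when a consumer group has no committed offset for a partition, the client falls back to its auto.offset.reset policy and seeks to a broker-reported position. A minimal Java sketch of the same pattern follows; the broker address, topic, and group id mirror the log, but the rest is illustrative and not the actual policy-pap source.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker seen in the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // consumer group seen in the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // With no committed offset for the group, the consumer falls back to this
            // policy, which is what produces the "Resetting offset for partition ..." line.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }

The reset lands on offset 1 rather than 0, consistent with a latest-style reset taken after one heartbeat record had already been produced to policy-pdp-pap-0.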
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.318885319Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.179203ms 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:26.360+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.322734977Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:28.653+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.323761956Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.027129ms 23:16:20 kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:28.653+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.329329983Z level=info msg="Executing migration" id="add index user_role.user_id" 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:28.655+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 2 ms 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.330353331Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.023238ms 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.488+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.334381575Z level=info msg="Executing 
migration" id="create builtin role table" 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [] 23:16:20 policy-pap | [2024-03-15T23:14:43.489+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.489+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.335184847Z level=info msg="Migration successfully executed" id="create builtin role table" duration=805.122µs 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.341906556Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.497+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.342921395Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.015159ms 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | 
[2024-03-15T23:14:43.598+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.352080303Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:20 kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.598+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting listener 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.352824814Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=742.78µs 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.598+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting timer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.35981209Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.599+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.368555796Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.748096ms 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.601+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting enqueue 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.373522616Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.601+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.374768931Z level=info 
msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.246775ms 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.601+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate started 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.38113429Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:54 23:16:20 policy-pap | [2024-03-15T23:14:43.603+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.382527399Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.392089ms 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.394784704Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.640+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.395925056Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.138842ms 23:16:20 kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.401856293Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:20 kafka | 
[2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.402986285Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.129562ms 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.642+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.407303766Z level=info msg="Executing migration" id="create seed assignment table" 23:16:20 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.408253273Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=948.287µs 23:16:20 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.642+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.412103131Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.672+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.41382474Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.721309ms 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 
23:13:55 23:16:20 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.420682043Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.673+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.428965496Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.287744ms 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.433979637Z level=info msg="Executing migration" id="permission kind migration" 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.678+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.443591907Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.610081ms 23:16:20 kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.682+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.44901514Z level=info msg="Executing migration" id="permission attribute migration" 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.457378835Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.365886ms 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.701+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.460945035Z level=info msg="Executing migration" id="permission identifier migration" 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping enqueue 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.466539393Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.633459ms 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping timer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.47889192Z level=info msg="Executing migration" id="add permission identifier index" 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.479894488Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.005268ms 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping listener 23:16:20 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 
1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.484114077Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopped 23:16:20 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.485155136Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.041099ms 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.490587379Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:16:20 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1503242313501100u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.491374851Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=784.892µs 23:16:20 kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate successful 23:16:20 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.494207151Z level=info msg="Executing migration" id="create query_history table v1" 23:16:20 kafka | [2024-03-15 23:14:22,475] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 start publishing next request 23:16:20 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.49488984Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=683.179µs 23:16:20 kafka | [2024-03-15 23:14:22,477] INFO [Broker id=1] Handling 
LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting 23:16:20 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.50553148Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting listener 23:16:20 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.506386064Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=854.464µs 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting timer 23:16:20 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1503242313501300u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.509910203Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710] 23:16:20 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1503242313501300u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.509958594Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=48.821µs 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a918cf66-cf68-45ea-b4be-5105781f3d6f 23:16:20 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1503242313501300u 1 2024-03-15 23:13:56 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.512740062Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting enqueue 23:16:20 policy-db-migrator | policyadmin: OK @ 1300 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.512778303Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=39.021µs 23:16:20 policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange started 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.515814259Z level=info msg="Executing migration" id="teams permissions migration" 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.516288212Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=474.633µs 23:16:20 policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710] 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.711+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.52153017Z level=info msg="Executing migration" id="dashboard permissions" 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | 
{"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.522215809Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=687.089µs 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.726+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.525137431Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.525880062Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=743.001µs 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.729+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.53042238Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:20 policy-pap | [2024-03-15T23:14:43.734+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.530634686Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=212.846µs 23:16:20 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.533568318Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:20 policy-pap | [2024-03-15T23:14:43.735+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 5b704fa0-786f-426e-ab49-de6046b0a817 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.534064332Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=496.044µs 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.762+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.541035398Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.543173259Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=2.13684ms 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.762+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.550291119Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.767+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.55139871Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.107791ms 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.556824723Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.564760986Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.936433ms 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping enqueue 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.570986151Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping timer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.571055103Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=70.052µs 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.573861842Z level=info msg="Executing migration" id="create correlation table v1" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping listener 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.574915152Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.052699ms 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopped 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.580366575Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange successful 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.583632377Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=3.264512ms 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 start publishing next request 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.590363556Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.591524199Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.165173ms 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting listener 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.596084407Z level=info msg="Executing migration" id="add correlation config column" 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting timer 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.602405295Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.320408ms 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=d2465129-9ed1-4fca-970a-e7296db7245c, expireMs=1710544513770] 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator 
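The policy-pap entries above show its two-stage inbound dispatch: a MessageTypeDispatcher first drops events whose messageName it does not handle on that topic ("discarding event of type PDP_STATE_CHANGE"), and a RequestIdDispatcher then routes PDP_STATUS replies to whichever listener registered the matching requestId, logging "no listener for request id ..." otherwise. A rough sketch of that pattern under those assumptions — this is an illustration, not the actual ONAP policy-pap code:

import json

# messageName filter, then requestId -> listener routing, mimicking the
# MessageTypeDispatcher / RequestIdDispatcher log lines above (sketch only).
listeners = {}  # requestId -> callback, registered before publishing a request

def on_message(raw: str, handled_types=("PDP_STATUS",)):
    msg = json.loads(raw)
    if msg.get("messageName") not in handled_types:
        print(f"discarding event of type {msg.get('messageName')}")
        return
    req_id = msg.get("response", {}).get("responseTo")
    listener = listeners.pop(req_id, None)
    if listener is None:
        print(f"no listener for request id {req_id}")
    else:
        listener(msg)

on_message(json.dumps({"messageName": "PDP_STATE_CHANGE"}))
on_message(json.dumps({"messageName": "PDP_STATUS",
                       "response": {"responseTo": "5b704fa0-786f-426e-ab49-de6046b0a817"}}))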
t=2024-03-15T23:13:52.606307395Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:20 policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting enqueue 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.607112107Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=804.692µs 23:16:20 policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:20 kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.610295887Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.612809418Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.51025ms 23:16:20 policy-pap | [2024-03-15T23:14:43.771+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate started 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.622636254Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:20 policy-pap | [2024-03-15T23:14:43.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.6448705Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.234365ms 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.651429594Z level=info msg="Executing migration" id="create correlation v2" 23:16:20 policy-pap | [2024-03-15T23:14:43.782+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.65270711Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.276876ms 23:16:20 policy-pap | [2024-03-15T23:14:43.782+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.658364679Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:20 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.660013556Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.641756ms 23:16:20 
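The PDP_UPDATE payload above carries the heartbeat interval and the (here empty) deploy/undeploy lists; the PDP_STATUS reply that follows echoes the requestId in response.responseTo ("Pdp already updated"). A hedged sketch of composing such a message with exactly the fields visible in the logged JSON; make_pdp_update is a hypothetical helper and the defaults are illustrative only:

import json, time, uuid

# Build a PDP_UPDATE message with the field set copied from the log above.
def make_pdp_update(source, name, group="defaultGroup", subgroup="apex"):
    return {
        "source": source,
        "pdpHeartbeatIntervalMs": 120000,
        "policiesToBeDeployed": [],
        "policiesToBeUndeployed": [],
        "messageName": "PDP_UPDATE",
        "requestId": str(uuid.uuid4()),
        "timestampMs": int(time.time() * 1000),
        "name": name,
        "pdpGroup": group,
        "pdpSubgroup": subgroup,
    }

msg = make_pdp_update("pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1",
                      "apex-4a6e2547-14f7-4b7d-af5c-d49180142040")
print(json.dumps(msg))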
policy-pap | [2024-03-15T23:14:43.782+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.664374368Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:20 policy-pap | [2024-03-15T23:14:43.792+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.666331233Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.956755ms 23:16:20 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.671350215Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:20 policy-pap | [2024-03-15T23:14:43.792+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.672546258Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.200164ms 23:16:20 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.676486859Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:20 policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.677114047Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=625.988µs 23:16:20 policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d2465129-9ed1-4fca-970a-e7296db7245c 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.683832386Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:20 policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping enqueue 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.685167973Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.334887ms 23:16:20 policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping timer 23:16:20 kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.691212703Z 
level=info msg="Executing migration" id="add provisioning column" 23:16:20 policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d2465129-9ed1-4fca-970a-e7296db7245c, expireMs=1710544513770] 23:16:20 kafka | [2024-03-15 23:14:22,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.701695798Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.483815ms 23:16:20 policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping listener 23:16:20 kafka | [2024-03-15 23:14:22,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.705216567Z level=info msg="Executing migration" id="create entity_events table" 23:16:20 policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopped 23:16:20 kafka | [2024-03-15 23:14:22,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.706319608Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.102231ms 23:16:20 policy-pap | [2024-03-15T23:14:43.802+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate successful 23:16:20 kafka | [2024-03-15 23:14:22,517] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.709642012Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:20 policy-pap | [2024-03-15T23:14:43.803+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 has no more requests 23:16:20 kafka | [2024-03-15 23:14:22,517] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.710874846Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.232074ms 23:16:20 policy-pap | [2024-03-15T23:14:49.283+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:20 kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.715788475Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:20 policy-pap | [2024-03-15T23:14:49.290+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:20 kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.71702998Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:20 policy-pap | [2024-03-15T23:14:49.676+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 23:16:20 kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.720670932Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:20 policy-pap | [2024-03-15T23:14:50.243+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 23:16:20 kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.721275029Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:20 policy-pap | [2024-03-15T23:14:50.243+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.727322959Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:20 kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:50.749+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.728827511Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.503882ms 23:16:20 kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:50.980+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.738223886Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 
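The grafana migrator lines follow a fixed pattern: log "Executing migration", run the step, then log success with a microsecond duration — or warn "Skipping migration: Already executed, but not recorded in migration log" when the schema change is already present but missing from the ledger. A toy runner reproducing that pattern; this is a sketch under those assumptions, not Grafana's actual migrator:

import time

applied = set()  # stand-in for the migration_log table

def run_migration(mig_id, fn, schema_already_changed=False):
    print(f'level=info msg="Executing migration" id="{mig_id}"')
    if schema_already_changed and mig_id not in applied:
        # The schema change exists but the ledger has no record of it.
        print(f'level=warn msg="Skipping migration: Already executed, '
              f'but not recorded in migration log" id="{mig_id}"')
        return
    start = time.perf_counter()
    fn()
    applied.add(mig_id)
    dur_us = (time.perf_counter() - start) * 1e6
    print(f'level=info msg="Migration successfully executed" '
          f'id="{mig_id}" duration={dur_us:.3f}µs')

run_migration("create file table", lambda: None)
run_migration("drop index UQE_dashboard_public_config_uid - v1",
              lambda: None, schema_already_changed=True)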
23:16:20 policy-pap | [2024-03-15T23:14:51.060+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.740119629Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.895093ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.060+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.74548272Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.060+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.747565819Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.079438ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.074+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-03-15T23:14:50Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-03-15T23:14:51Z, user=policyadmin)] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.751663144Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.762+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.752925549Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.261935ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.763+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.758762794Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting 
the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.764+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.760539874Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.773219ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.764+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.764389972Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.764+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.766304816Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.913974ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:51.777+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-15T23:14:51Z, user=policyadmin)] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.774097495Z level=info msg="Executing migration" id="Drop public config table" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.77499201Z level=info msg="Migration successfully executed" id="Drop public config table" duration=894.045µs 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.781180034Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy 
operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.783126369Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.948975ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.786847564Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:52.127+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.788025087Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.177513ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:52.127+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.792487032Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:14:52.139+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-15T23:14:52Z, user=policyadmin)] 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.79452192Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.033438ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:15:12.739+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.798761009Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:15:12.741+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:16:20 grafana | logger=migrator 
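The PolicyAuditManager entries above show each deploy/undeploy being registered as a PolicyAudit record and then flushed to the database in a batch ("sending audit records to database: [...]"). A sketch of that record shape using only the fields printed in the log; this mirrors the toString output above, not the actual ONAP class:

from dataclasses import dataclass
from typing import List, Optional

# Audit record shape inferred from the PolicyAudit(...) output above.
@dataclass
class PolicyAudit:
    audit_id: Optional[int]
    pdp_group: str
    pdp_type: str
    policy: str      # "name version", e.g. "onap.restart.tca 1.0.0"
    action: str      # DEPLOYMENT or UNDEPLOYMENT
    timestamp: str
    user: str

pending: List[PolicyAudit] = []

def register(action, group, pdp_type, policy, ts, user="policyadmin"):
    pending.append(PolicyAudit(None, group, pdp_type, policy, action, ts, user))

def flush():
    print(f"sending audit records to database: {pending}")
    pending.clear()

register("UNDEPLOYMENT", "testGroup", "pdpTypeA",
         "onap.restart.tca 1.0.0", "2024-03-15T23:14:51Z")
flush()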
t=2024-03-15T23:13:52.800484127Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.723928ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:20 policy-pap | [2024-03-15T23:15:13.600+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599] 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.805485638Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:20 policy-pap | [2024-03-15T23:15:13.711+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710] 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.829938786Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.456898ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.834772722Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.84323829Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.464948ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.847045977Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.853795737Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.74867ms 23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.8599436Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:20 
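The two "discarded (expired)" lines above close the loop on the timers registered earlier: each update/state-change request gets a TimerManager timer at registration ("state-change timer waiting 30000ms") that is cancelled when the matching PDP_STATUS arrives and swept as expired otherwise. The 5b704fa0-786f-426e-ab49-de6046b0a817 timer registered at 23:14:43.710 carries expireMs=1710544513710, exactly 30000 ms later, and is discarded at 23:15:13.711, one tick past the deadline. A quick check of that arithmetic, using only values taken from the log lines above:

from datetime import datetime, timezone

# Timer registered at 2024-03-15T23:14:43.710Z with expireMs=1710544513710
# and a 30000 ms wait (values from the policy-pap log lines above).
registered = datetime(2024, 3, 15, 23, 14, 43, 710000, tzinfo=timezone.utc)
registered_ms = int(registered.timestamp()) * 1000 + 710  # exact epoch millis
assert registered_ms + 30000 == 1710544513710
print("expires at", datetime.fromtimestamp((registered_ms + 30000) / 1000,
                                           tz=timezone.utc))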
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.8602848Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=341.15µs
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.865960289Z level=info msg="Executing migration" id="add share column"
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.874917661Z level=info msg="Migration successfully executed" id="add share column" duration=8.957182ms
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.878464181Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.878840632Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=376.441µs
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.882454583Z level=info msg="Executing migration" id="create file table"
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.883577675Z level=info msg="Migration successfully executed" id="create file table" duration=1.123032ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.888538385Z level=info msg="Executing migration" id="file table idx: path natural pk"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.889777829Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.239684ms
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.893666849Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.895048338Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.380759ms
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.901363245Z level=info msg="Executing migration" id="create file_meta table"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.902321092Z level=info msg="Migration successfully executed" id="create file_meta table" duration=955.037µs
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.909183955Z level=info msg="Executing migration" id="file table idx: path key"
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.911070368Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.886323ms
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.91646572Z level=info msg="Executing migration" id="set path collation in file table"
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.916600664Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=111.653µs
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.92036814Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
23:16:20 kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.920539055Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=170.555µs
23:16:20 kafka | [2024-03-15 23:14:22,524] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.924162327Z level=info msg="Executing migration" id="managed permissions migration"
23:16:20 kafka | [2024-03-15 23:14:22,524] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.925116624Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=953.887µs
23:16:20 kafka | [2024-03-15 23:14:22,579] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.9296243Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
23:16:20 kafka | [2024-03-15 23:14:22,590] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.929969Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=344.2µs
23:16:20 kafka | [2024-03-15 23:14:22,592] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.935234588Z level=info msg="Executing migration" id="RBAC action name migrator"
23:16:20 kafka | [2024-03-15 23:14:22,593] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.937282626Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.047688ms
23:16:20 kafka | [2024-03-15 23:14:22,594] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
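The broker's repeated "Created log for partition ... with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}" entries show the per-topic settings Kafka applies to its internal __consumer_offsets topic. As an illustrative aside only (not produced by this job), a topic with the same properties could be created with Kafka's Java AdminClient roughly as follows; the topic name and bootstrap address are placeholders.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public class CreateCompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder address; the CSIT job wires up its own broker endpoint.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // Mirror the properties the broker logs for __consumer_offsets:
                // cleanup.policy=compact, compression.type=producer, segment.bytes=104857600.
                // 50 partitions / replication factor 1 match the single-broker log above.
                NewTopic topic = new NewTopic("example-compacted-topic", 50, (short) 1)
                        .configs(Map.of(
                                TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                                TopicConfig.COMPRESSION_TYPE_CONFIG, "producer",
                                TopicConfig.SEGMENT_BYTES_CONFIG, "104857600"));
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }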
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.942602876Z level=info msg="Executing migration" id="Add UID column to playlist"
23:16:20 kafka | [2024-03-15 23:14:22,607] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.951763823Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.160278ms
23:16:20 kafka | [2024-03-15 23:14:22,608] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.955931581Z level=info msg="Executing migration" id="Update uid column values in playlist"
23:16:20 kafka | [2024-03-15 23:14:22,608] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.956203078Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=271.618µs
23:16:20 kafka | [2024-03-15 23:14:22,608] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.962922007Z level=info msg="Executing migration" id="Add index for uid in playlist"
23:16:20 kafka | [2024-03-15 23:14:22,608] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.964910023Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.987566ms
23:16:20 kafka | [2024-03-15 23:14:22,617] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.969216514Z level=info msg="Executing migration" id="update group index for alert rules"
23:16:20 kafka | [2024-03-15 23:14:22,618] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.969583795Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=368.721µs
23:16:20 kafka | [2024-03-15 23:14:22,618] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.973584637Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
23:16:20 kafka | [2024-03-15 23:14:22,618] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.973834224Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=249.267µs
23:16:20 kafka | [2024-03-15 23:14:22,618] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.984605027Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
23:16:20 kafka | [2024-03-15 23:14:22,626] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.985599345Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.000548ms
23:16:20 kafka | [2024-03-15 23:14:22,626] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:52.990275547Z level=info msg="Executing migration" id="add action column to seed_assignment"
23:16:20 kafka | [2024-03-15 23:14:22,626] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.000184035Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.902518ms
23:16:20 kafka | [2024-03-15 23:14:22,626] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.003914957Z level=info msg="Executing migration" id="add scope column to seed_assignment"
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.013006814Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.091447ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.018322974Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
23:16:20 kafka | [2024-03-15 23:14:22,627] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.019178062Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=854.787µs
23:16:20 kafka | [2024-03-15 23:14:22,632] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.025514564Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
23:16:20 kafka | [2024-03-15 23:14:22,633] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,633] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,633] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.103296569Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.783875ms
23:16:20 kafka | [2024-03-15 23:14:22,633] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.110885651Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
23:16:20 kafka | [2024-03-15 23:14:22,640] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.112200443Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.319892ms
23:16:20 kafka | [2024-03-15 23:14:22,641] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.11928432Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
23:16:20 kafka | [2024-03-15 23:14:22,641] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.120531279Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.243809ms
23:16:20 kafka | [2024-03-15 23:14:22,641] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.124421174Z level=info msg="Executing migration" id="add primary key to seed_assigment"
23:16:20 kafka | [2024-03-15 23:14:22,641] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.150838058Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.413333ms
23:16:20 kafka | [2024-03-15 23:14:22,661] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.156425196Z level=info msg="Executing migration" id="add origin column to seed_assignment"
23:16:20 kafka | [2024-03-15 23:14:22,664] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.163689758Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.264342ms
23:16:20 kafka | [2024-03-15 23:14:22,664] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.177993365Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
23:16:20 kafka | [2024-03-15 23:14:22,664] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.178809481Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=820.996µs
23:16:20 kafka | [2024-03-15 23:14:22,664] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.183401708Z level=info msg="Executing migration" id="prevent seeding OnCall access"
23:16:20 kafka | [2024-03-15 23:14:22,680] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.183861683Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=456.754µs
23:16:20 kafka | [2024-03-15 23:14:22,681] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.188079127Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
23:16:20 kafka | [2024-03-15 23:14:22,681] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.188375077Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=295.96µs
23:16:20 kafka | [2024-03-15 23:14:22,682] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.196040952Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
23:16:20 kafka | [2024-03-15 23:14:22,682] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.196368742Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=327.96µs
23:16:20 kafka | [2024-03-15 23:14:22,691] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.203546082Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:20 kafka | [2024-03-15 23:14:22,694] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.204290885Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=750.464µs
23:16:20 kafka | [2024-03-15 23:14:22,694] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.209477381Z level=info msg="Executing migration" id="create folder table"
23:16:20 kafka | [2024-03-15 23:14:22,694] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.21132907Z level=info msg="Migration successfully executed" id="create folder table" duration=1.853959ms
23:16:20 kafka | [2024-03-15 23:14:22,694] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.215196604Z level=info msg="Executing migration" id="Add index for parent_uid"
23:16:20 kafka | [2024-03-15 23:14:22,702] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.21664008Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.443086ms
23:16:20 kafka | [2024-03-15 23:14:22,703] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.221916828Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:16:20 kafka | [2024-03-15 23:14:22,703] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.223299483Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.382685ms
23:16:20 kafka | [2024-03-15 23:14:22,703] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.227145115Z level=info msg="Executing migration" id="Update folder title length"
23:16:20 kafka | [2024-03-15 23:14:22,703] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.227262829Z level=info msg="Migration successfully executed" id="Update folder title length" duration=118.824µs
23:16:20 kafka | [2024-03-15 23:14:22,713] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.231919438Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:16:20 kafka | [2024-03-15 23:14:22,713] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.234028835Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.109407ms
23:16:20 kafka | [2024-03-15 23:14:22,714] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.238023043Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:16:20 kafka | [2024-03-15 23:14:22,714] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.239284903Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.26286ms
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.246677169Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:16:20 kafka | [2024-03-15 23:14:22,714] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.248420015Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.742526ms
23:16:20 kafka | [2024-03-15 23:14:22,720] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.254545691Z level=info msg="Executing migration" id="Sync dashboard and folder table"
23:16:20 kafka | [2024-03-15 23:14:22,720] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.255129979Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=584.678µs
23:16:20 kafka | [2024-03-15 23:14:22,720] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.258642182Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
23:16:20 kafka | [2024-03-15 23:14:22,720] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.259025674Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=383.772µs
23:16:20 kafka | [2024-03-15 23:14:22,720] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.263800166Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
23:16:20 kafka | [2024-03-15 23:14:22,727] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.265608694Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.808258ms
23:16:20 kafka | [2024-03-15 23:14:22,727] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.270376657Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
23:16:20 kafka | [2024-03-15 23:14:22,727] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.272265907Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.889961ms
23:16:20 kafka | [2024-03-15 23:14:22,727] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.276082109Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
23:16:20 kafka | [2024-03-15 23:14:22,727] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.277314748Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.232719ms
23:16:20 kafka | [2024-03-15 23:14:22,735] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.281851223Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
23:16:20 kafka | [2024-03-15 23:14:22,736] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.283873508Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.021125ms
23:16:20 kafka | [2024-03-15 23:14:22,736] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.290534551Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
23:16:20 kafka | [2024-03-15 23:14:22,736] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.291855233Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.321523ms
23:16:20 kafka | [2024-03-15 23:14:22,736] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.297484513Z level=info msg="Executing migration" id="create anon_device table"
23:16:20 kafka | [2024-03-15 23:14:22,743] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.298561637Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.076984ms
23:16:20 kafka | [2024-03-15 23:14:22,743] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.302361538Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
23:16:20 kafka | [2024-03-15 23:14:22,743] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.303852776Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.493808ms
23:16:20 kafka | [2024-03-15 23:14:22,743] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,743] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
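Each "Leader __consumer_offsets-N ... ISR [1] ..." state-change entry above records a partition's new leader epoch and in-sync replica set. As a minimal sketch (assuming a recent Java client; the bootstrap address is a placeholder), the same leader/ISR facts can be read back with the AdminClient:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class ShowPartitionLeaders {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            try (Admin admin = Admin.create(props)) {
                TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                        .allTopicNames().get().get("__consumer_offsets");
                // Print the same per-partition leader/ISR facts the state-change log records.
                desc.partitions().forEach(p ->
                        System.out.printf("partition=%d leader=%s isr=%s%n",
                                p.partition(), p.leader(), p.isr()));
            }
        }
    }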
23:16:20 kafka | [2024-03-15 23:14:22,749] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.309341481Z level=info msg="Executing migration" id="add index anon_device.updated_at"
23:16:20 kafka | [2024-03-15 23:14:22,749] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.311412618Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.071616ms
23:16:20 kafka | [2024-03-15 23:14:22,749] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.316654045Z level=info msg="Executing migration" id="create signing_key table"
23:16:20 kafka | [2024-03-15 23:14:22,749] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.318311838Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.657683ms
23:16:20 kafka | [2024-03-15 23:14:22,749] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.322368098Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
23:16:20 kafka | [2024-03-15 23:14:22,756] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.323680519Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.312202ms
23:16:20 kafka | [2024-03-15 23:14:22,757] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.330553129Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
23:16:20 kafka | [2024-03-15 23:14:22,757] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.333178563Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.620434ms
23:16:20 kafka | [2024-03-15 23:14:22,757] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.34123989Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
23:16:20 kafka | [2024-03-15 23:14:22,757] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.341713836Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=474.696µs
23:16:20 kafka | [2024-03-15 23:14:22,763] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.345446085Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
23:16:20 kafka | [2024-03-15 23:14:22,763] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.358531693Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.086118ms
23:16:20 kafka | [2024-03-15 23:14:22,763] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.363002206Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
23:16:20 kafka | [2024-03-15 23:14:22,763] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.363917015Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=919.059µs
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.367358525Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
23:16:20 kafka | [2024-03-15 23:14:22,764] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,781] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.369298627Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.936832ms
23:16:20 kafka | [2024-03-15 23:14:22,781] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.381722344Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
23:16:20 kafka | [2024-03-15 23:14:22,781] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.38378657Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.029554ms
23:16:20 kafka | [2024-03-15 23:14:22,781] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.388619334Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
23:16:20 kafka | [2024-03-15 23:14:22,781] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.39004912Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.430756ms
23:16:20 kafka | [2024-03-15 23:14:22,788] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.394000996Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
23:16:20 kafka | [2024-03-15 23:14:22,789] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.395314438Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.313442ms
23:16:20 kafka | [2024-03-15 23:14:22,789] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.399246304Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
23:16:20 kafka | [2024-03-15 23:14:22,789] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.401540417Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.294113ms
23:16:20 kafka | [2024-03-15 23:14:22,789] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.405456122Z level=info msg="Executing migration" id="create sso_setting table"
23:16:20 kafka | [2024-03-15 23:14:22,796] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.407743785Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.286703ms
23:16:20 kafka | [2024-03-15 23:14:22,797] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.412831018Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
23:16:20 kafka | [2024-03-15 23:14:22,797] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.413740717Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=910.059µs
23:16:20 kafka | [2024-03-15 23:14:22,797] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.420834413Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
23:16:20 kafka | [2024-03-15 23:14:22,797] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.421382431Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=552.348µs
23:16:20 kafka | [2024-03-15 23:14:22,804] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.427969091Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
23:16:20 kafka | [2024-03-15 23:14:22,805] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.428134776Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=164.925µs
23:16:20 kafka | [2024-03-15 23:14:22,805] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.431926478Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
23:16:20 kafka | [2024-03-15 23:14:22,805] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.443105525Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.179137ms
23:16:20 kafka | [2024-03-15 23:14:22,805] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.449194969Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
23:16:20 kafka | [2024-03-15 23:14:22,813] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.45891744Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.720941ms
23:16:20 kafka | [2024-03-15 23:14:22,814] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.467744742Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
23:16:20 kafka | [2024-03-15 23:14:22,814] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.468181176Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=436.434µs
23:16:20 kafka | [2024-03-15 23:14:22,814] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=migrator t=2024-03-15T23:13:53.47144051Z level=info msg="migrations completed" performed=547 skipped=0 duration=4.112204977s
23:16:20 kafka | [2024-03-15 23:14:22,814] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=sqlstore t=2024-03-15T23:13:53.481925275Z level=info msg="Created default admin" user=admin
23:16:20 kafka | [2024-03-15 23:14:22,826] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=sqlstore t=2024-03-15T23:13:53.48238634Z level=info msg="Created default organization"
23:16:20 kafka | [2024-03-15 23:14:22,828] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=secrets t=2024-03-15T23:13:53.488576197Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
23:16:20 kafka | [2024-03-15 23:14:22,828] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
23:16:20 grafana | logger=plugin.store t=2024-03-15T23:13:53.509152545Z level=info msg="Loading plugins..."
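The migrator entries above come in "Executing migration" / "Migration successfully executed" pairs with a measured duration, closing with the "migrations completed" performed=547 summary. The following is a generic, hypothetical sketch of that execute-and-time pattern, not Grafana's actual Go implementation (which also persists applied migration ids, in its migration_log table, so reruns can skip them):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class MigrationRunner {
        // Hypothetical registry: migration id -> action, kept in registration order.
        private final Map<String, Runnable> migrations = new LinkedHashMap<>();

        public void register(String id, Runnable step) {
            migrations.put(id, step);
        }

        public void runAll() {
            int performed = 0;
            Instant start = Instant.now();
            for (Map.Entry<String, Runnable> m : migrations.entrySet()) {
                System.out.printf("Executing migration id=%s%n", m.getKey());
                Instant t0 = Instant.now();
                m.getValue().run();
                // Duration prints in ISO-8601 form here, unlike Grafana's µs/ms style.
                System.out.printf("Migration successfully executed id=%s duration=%s%n",
                        m.getKey(), Duration.between(t0, Instant.now()));
                performed++;
            }
            System.out.printf("migrations completed performed=%d duration=%s%n",
                    performed, Duration.between(start, Instant.now()));
        }
    }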
23:16:20 kafka | [2024-03-15 23:14:22,828] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:20 grafana | logger=local.finder t=2024-03-15T23:13:53.551316162Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:20 kafka | [2024-03-15 23:14:22,828] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:20 grafana | logger=plugin.store t=2024-03-15T23:13:53.551350483Z level=info msg="Plugins loaded" count=55 duration=42.199738ms 23:16:20 kafka | [2024-03-15 23:14:22,836] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:20 grafana | logger=query_data t=2024-03-15T23:13:53.55813338Z level=info msg="Query Service initialization" 23:16:20 kafka | [2024-03-15 23:14:22,836] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:20 grafana | logger=live.push_http t=2024-03-15T23:13:53.561831788Z level=info msg="Live Push Gateway initialization" 23:16:20 kafka | [2024-03-15 23:14:22,836] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:20 grafana | logger=ngalert.migration t=2024-03-15T23:13:53.56721181Z level=info msg=Starting 23:16:20 kafka | [2024-03-15 23:14:22,836] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:20 grafana | logger=ngalert.migration t=2024-03-15T23:13:53.567642783Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 23:16:20 kafka | [2024-03-15 23:14:22,836] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:20 grafana | logger=ngalert.migration orgID=1 t=2024-03-15T23:13:53.568230842Z level=info msg="Migrating alerts for organisation" 23:16:20 kafka | [2024-03-15 23:14:22,849] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:20 grafana | logger=ngalert.migration orgID=1 t=2024-03-15T23:13:53.568868903Z level=info msg="Alerts found to migrate" alerts=0 23:16:20 kafka | [2024-03-15 23:14:22,849] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:20 grafana | logger=ngalert.migration t=2024-03-15T23:13:53.570836725Z level=info msg="Completed alerting migration" 23:16:20 kafka | [2024-03-15 23:14:22,849] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:20 kafka | [2024-03-15 23:14:22,849] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:20 grafana | logger=ngalert.state.manager t=2024-03-15T23:13:53.598173469Z level=info msg="Running in alternative execution of Error/NoData mode" 23:16:20 kafka | [2024-03-15 23:14:22,849] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:20 grafana | logger=infra.usagestats.collector t=2024-03-15T23:13:53.599951896Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:20 kafka | [2024-03-15 23:14:22,855] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:20 grafana | logger=provisioning.datasources t=2024-03-15T23:13:53.601725771Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:20 kafka | [2024-03-15 23:14:22,855] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:20 grafana | logger=provisioning.alerting t=2024-03-15T23:13:53.614811099Z level=info msg="starting to provision alerting" 23:16:20 kafka | [2024-03-15 23:14:22,855] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:20 grafana | logger=provisioning.alerting t=2024-03-15T23:13:53.61482731Z level=info msg="finished to provision alerting" 23:16:20 kafka | [2024-03-15 23:14:22,856] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:20 grafana | logger=ngalert.state.manager t=2024-03-15T23:13:53.615066977Z level=info msg="Warming state cache for startup" 23:16:20 kafka | [2024-03-15 23:14:22,856] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, 
ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:20 grafana | logger=ngalert.multiorg.alertmanager t=2024-03-15T23:13:53.615248193Z level=info msg="Starting MultiOrg Alertmanager" 23:16:20 kafka | [2024-03-15 23:14:22,863] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:20 grafana | logger=ngalert.state.manager t=2024-03-15T23:13:53.615591794Z level=info msg="State cache has been initialized" states=0 duration=525.377µs 23:16:20 kafka | [2024-03-15 23:14:22,863] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:20 grafana | logger=ngalert.scheduler t=2024-03-15T23:13:53.615640506Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:16:20 kafka | [2024-03-15 23:14:22,864] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:20 grafana | logger=ticker t=2024-03-15T23:13:53.615708068Z level=info msg=starting first_tick=2024-03-15T23:14:00Z 23:16:20 kafka | [2024-03-15 23:14:22,864] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:20 grafana | logger=grafanaStorageLogger t=2024-03-15T23:13:53.616906196Z level=info msg="Storage starting" 23:16:20 kafka | [2024-03-15 23:14:22,864] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
23:16:20 grafana | logger=http.server t=2024-03-15T23:13:53.618464056Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
23:16:20 kafka | [2024-03-15 23:14:22,869] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=provisioning.dashboard t=2024-03-15T23:13:53.654511198Z level=info msg="starting to provision dashboards"
23:16:20 kafka | [2024-03-15 23:14:22,870] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=sqlstore.transactions t=2024-03-15T23:13:53.67118173Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:20 kafka | [2024-03-15 23:14:22,870] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
23:16:20 grafana | logger=sqlstore.transactions t=2024-03-15T23:13:53.681660815Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
23:16:20 kafka | [2024-03-15 23:14:22,870] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=plugins.update.checker t=2024-03-15T23:13:53.708179012Z level=info msg="Update check succeeded" duration=93.203948ms
23:16:20 kafka | [2024-03-15 23:14:22,870] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 grafana | logger=grafana.update.checker t=2024-03-15T23:13:53.72502925Z level=info msg="Update check succeeded" duration=110.062236ms
23:16:20 kafka | [2024-03-15 23:14:22,879] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 grafana | logger=provisioning.dashboard t=2024-03-15T23:13:53.969403907Z level=info msg="finished to provision dashboards"
23:16:20 kafka | [2024-03-15 23:14:22,880] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 grafana | logger=grafana-apiserver t=2024-03-15T23:13:54.219888137Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
23:16:20 kafka | [2024-03-15 23:14:22,881] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
23:16:20 grafana | logger=grafana-apiserver t=2024-03-15T23:13:54.220306479Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
23:16:20 kafka | [2024-03-15 23:14:22,881] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 grafana | logger=infra.usagestats t=2024-03-15T23:14:39.629423581Z level=info msg="Usage stats are ready to report"
23:16:20 kafka | [2024-03-15 23:14:22,881] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,887] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,888] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,888] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,888] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,888] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
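The two sqlstore.transactions entries above show Grafana retrying a write because SQLite (the default grafana.db backend) allows only one writer at a time, and dashboard provisioning races other startup writes. A generic sketch of that retry-on-lock pattern, not Grafana's actual Go implementation:

    import sqlite3
    import time

    def with_retry(conn, sql, params=(), retries=5, sleep_s=0.05):
        """Retry a write when SQLite reports 'database is locked'."""
        for attempt in range(retries):
            try:
                with conn:  # commits on success, rolls back on error
                    return conn.execute(sql, params)
            except sqlite3.OperationalError as exc:
                if "database is locked" not in str(exc) or attempt == retries - 1:
                    raise
                # mirrors the log above: "Database locked, sleeping then retrying"
                time.sleep(sleep_s * (attempt + 1))

    conn = sqlite3.connect("grafana.db", timeout=0)  # timeout=0 surfaces locks immediately
    with_retry(conn, "CREATE TABLE IF NOT EXISTS dashboard (id INTEGER PRIMARY KEY)")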
23:16:20 kafka | [2024-03-15 23:14:22,895] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,896] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,896] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,896] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,896] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,903] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,903] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,904] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,904] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,904] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,910] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,911] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,911] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,911] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,911] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,918] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,919] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,919] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,919] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,919] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(RYQK08lOSYaXD4Alb86gyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,926] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,927] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,927] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,927] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,927] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,934] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,935] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,935] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,935] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,935] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,942] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,943] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,943] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,943] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,943] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,952] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,958] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,958] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,958] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,958] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,967] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,968] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,968] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,968] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,968] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,977] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,977] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,977] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,977] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,977] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,983] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,984] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,984] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,984] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,984] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,992] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:22,993] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:22,993] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,993] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:22,993] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:22,999] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,000] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,000] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,000] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,001] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,008] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,008] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,008] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,008] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,008] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,016] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,017] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,017] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,017] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,017] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,025] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,025] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,025] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,025] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,025] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,032] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,033] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,033] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,033] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,033] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,040] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,041] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,041] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,041] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,041] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,048] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,048] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,048] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,048] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,048] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,054] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:20 kafka | [2024-03-15 23:14:23,055] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:20 kafka | [2024-03-15 23:14:23,055] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,055] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
23:16:20 kafka | [2024-03-15 23:14:23,055] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
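The 50 __consumer_offsets-N partitions created above back Kafka's group coordination: each consumer group is pinned to one of them, and the broker leading that partition serves as the group's coordinator (hence the "Elected as the group coordinator for partition N" entries further down). The mapping is abs(groupId.hashCode) % offsets.topic.num.partitions, with 50 the default. A sketch reproducing that mapping outside the JVM; the group ids at the bottom are made up for illustration:

    def java_string_hash(s: str) -> int:
        """Java's String.hashCode: h = 31*h + c over the code units, as signed 32-bit."""
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - (1 << 32) if h >= (1 << 31) else h

    def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
        """Kafka keeps a group's offsets in __consumer_offsets-<returned partition>."""
        # Kafka's Utils.abs masks the sign bit rather than negating.
        return (java_string_hash(group_id) & 0x7FFFFFFF) % num_partitions

    for group in ("policy-pap", "example-group"):  # hypothetical group ids
        print(group, "->", "__consumer_offsets-%d" % offsets_partition_for(group))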
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,066] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,070] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,071] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,071] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,071] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:20 kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,076] INFO [Broker id=1] Finished LeaderAndIsr request in 603ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
23:16:20 kafka | [2024-03-15 23:14:23,077] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:20 kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
(kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,082] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,083] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,083] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=R2o1IzsbR_ucSKqMoC8FrA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=RYQK08lOSYaXD4Alb86gyg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,087] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,087] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 17 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:20 kafka | [2024-03-15 23:14:23,097] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 
23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] 
Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,101] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,102] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:20 kafka | [2024-03-15 23:14:23,214] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group a833d76c-6968-4ee8-9b4d-b3fefbf07611 in Empty state. Created a new member id consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:23,214] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:23,237] INFO [GroupCoordinator 1]: Preparing to rebalance group a833d76c-6968-4ee8-9b4d-b3fefbf07611 in state PreparingRebalance with old generation 0 (__consumer_offsets-44) (reason: Adding new member consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:23,237] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:23,840] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 in Empty state. 
Created a new member id consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:23,848] INFO [GroupCoordinator 1]: Preparing to rebalance group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:26,251] INFO [GroupCoordinator 1]: Stabilized group a833d76c-6968-4ee8-9b4d-b3fefbf07611 generation 1 (__consumer_offsets-44) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:26,258] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:26,283] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:26,283] INFO [GroupCoordinator 1]: Assignment received from leader consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e for group a833d76c-6968-4ee8-9b4d-b3fefbf07611 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:26,850] INFO [GroupCoordinator 1]: Stabilized group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:20 kafka | [2024-03-15 23:14:26,871] INFO [GroupCoordinator 1]: Assignment received from leader consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70 for group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:20 ++ echo 'Tearing down containers...' 23:16:20 Tearing down containers... 23:16:20 ++ docker-compose down -v --remove-orphans 23:16:20 Stopping policy-apex-pdp ... 23:16:20 Stopping policy-pap ... 23:16:20 Stopping policy-api ... 23:16:20 Stopping grafana ... 23:16:20 Stopping kafka ... 23:16:20 Stopping mariadb ... 23:16:20 Stopping simulator ... 23:16:20 Stopping compose_zookeeper_1 ... 23:16:20 Stopping prometheus ... 23:16:21 Stopping grafana ... done 23:16:21 Stopping prometheus ... done 23:16:31 Stopping policy-apex-pdp ... done 23:16:41 Stopping policy-pap ... done 23:16:41 Stopping simulator ... done 23:16:42 Stopping mariadb ... done 23:16:42 Stopping kafka ... done 23:16:43 Stopping compose_zookeeper_1 ... done 23:16:52 Stopping policy-api ... done 23:16:52 Removing policy-apex-pdp ... 23:16:52 Removing policy-pap ... 23:16:52 Removing policy-api ... 23:16:52 Removing grafana ... 23:16:52 Removing kafka ... 23:16:52 Removing policy-db-migrator ... 23:16:52 Removing mariadb ... 23:16:52 Removing simulator ... 23:16:52 Removing compose_zookeeper_1 ... 23:16:52 Removing prometheus ... 23:16:52 Removing policy-api ... done 23:16:52 Removing policy-apex-pdp ... done 23:16:52 Removing policy-db-migrator ... done 23:16:52 Removing simulator ... 
done
23:16:52 Removing grafana ... done
23:16:52 Removing kafka ... done
23:16:52 Removing prometheus ... done
23:16:52 Removing policy-pap ... done
23:16:52 Removing mariadb ... done
23:16:52 Removing compose_zookeeper_1 ... done
23:16:52 Removing network compose_default
23:16:52 ++ cd /w/workspace/policy-pap-master-project-csit-pap
23:16:52 + load_set
23:16:52 + _setopts=hxB
23:16:52 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:52 ++ tr : ' '
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o braceexpand
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o hashall
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o interactive-comments
23:16:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:52 + set +o xtrace
23:16:52 ++ echo hxB
23:16:52 ++ sed 's/./& /g'
23:16:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:52 + set +h
23:16:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:52 + set +x
23:16:52 + [[ -n /tmp/tmp.Xn1lruRwEW ]]
23:16:52 + rsync -av /tmp/tmp.Xn1lruRwEW/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:16:52 sending incremental file list
23:16:52 ./
23:16:52 log.html
23:16:52 output.xml
23:16:52 report.html
23:16:52 testplan.txt
23:16:52
23:16:52 sent 919,289 bytes received 95 bytes 1,838,768.00 bytes/sec
23:16:52 total size is 918,743 speedup is 1.00
23:16:52 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:16:52 + exit 1
23:16:52 Build step 'Execute shell' marked build as failure
23:16:52 $ ssh-agent -k
23:16:52 unset SSH_AUTH_SOCK;
23:16:52 unset SSH_AGENT_PID;
23:16:52 echo Agent pid 2078 killed;
23:16:52 [ssh-agent] Stopped.
23:16:52 Robot results publisher started...
23:16:52 INFO: Checking test criticality is deprecated and will be dropped in a future release!
23:16:52 -Parsing output xml:
23:16:53 Done!
23:16:53 WARNING! Could not find file: **/log.html
23:16:53 WARNING! Could not find file: **/report.html
23:16:53 -Copying log files to build dir:
23:16:53 Done!
23:16:53 -Assigning results to build:
23:16:53 Done!
23:16:53 -Checking thresholds:
23:16:53 Done!
23:16:53 Done publishing Robot results.
23:16:53 [PostBuildScript] - [INFO] Executing post build scripts.
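For reference, the teardown-and-archive sequence traced above reduces to a few shell steps. The following is a minimal sketch, not the job's actual script: the paths are hard-coded from this run's trace, whereas the real job derives them from Jenkins environment variables.

#!/bin/bash
# Minimal sketch of the teardown/archive/verdict steps traced above.
# WORKSPACE and RESULTS_DIR are taken verbatim from this run's trace.
WORKSPACE=/w/workspace/policy-pap-master-project-csit-pap
RESULTS_DIR=/tmp/tmp.Xn1lruRwEW    # directory the Robot Framework results were written to

# Stop and remove every container, named volume and orphan started by the
# compose file (run from the directory holding the compose file, as the job does).
docker-compose down -v --remove-orphans

# Copy the Robot artifacts (log.html, output.xml, report.html, testplan.txt)
# into the workspace archive area so Jenkins can pick them up.
rsync -av "${RESULTS_DIR}/" "${WORKSPACE}/csit/archives/pap"

# Drop the checked-out models and propagate the test verdict;
# the non-zero exit is what marked this build as a failure.
rm -rf "${WORKSPACE}/models"
exit 1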
23:16:53 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9688890317075254486.sh
23:16:53 ---> sysstat.sh
23:16:53 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10598062073148952752.sh
23:16:53 ---> package-listing.sh
23:16:53 ++ facter osfamily
23:16:53 ++ tr '[:upper:]' '[:lower:]'
23:16:53 + OS_FAMILY=debian
23:16:53 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:16:53 + START_PACKAGES=/tmp/packages_start.txt
23:16:53 + END_PACKAGES=/tmp/packages_end.txt
23:16:53 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:16:53 + PACKAGES=/tmp/packages_start.txt
23:16:53 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:53 + PACKAGES=/tmp/packages_end.txt
23:16:53 + case "${OS_FAMILY}" in
23:16:53 + dpkg -l
23:16:53 + grep '^ii'
23:16:54 + '[' -f /tmp/packages_start.txt ']'
23:16:54 + '[' -f /tmp/packages_end.txt ']'
23:16:54 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:16:54 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:54 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:16:54 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:16:54 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1620010647216354571.sh
23:16:54 ---> capture-instance-metadata.sh
23:16:54 Setup pyenv:
23:16:54 system
23:16:54 3.8.13
23:16:54 3.9.13
23:16:54 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:16:54 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
23:16:55 lf-activate-venv(): INFO: Installing: lftools
23:17:05 lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
23:17:05 INFO: Running in OpenStack, capturing instance metadata
23:17:05 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1491707428034886176.sh
23:17:05 provisioning config files...
23:17:05 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config11676880199612686298tmp
23:17:05 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:17:05 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:17:05 [EnvInject] - Injecting environment variables from a build step.
23:17:05 [EnvInject] - Injecting as environment variables the properties content
23:17:05 SERVER_ID=logs
23:17:05
23:17:05 [EnvInject] - Variables injected successfully.
23:17:05 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15389883869151244807.sh
23:17:05 ---> create-netrc.sh
23:17:05 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6936332521640930828.sh
23:17:05 ---> python-tools-install.sh
23:17:05 Setup pyenv:
23:17:05 system
23:17:05 3.8.13
23:17:05 3.9.13
23:17:06 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:06 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
23:17:07 lf-activate-venv(): INFO: Installing: lftools
23:17:15 lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
23:17:15 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2811132065335310372.sh
23:17:15 ---> sudo-logs.sh
23:17:15 Archiving 'sudo' log..
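The package-listing step traced above records the packages installed at the start and end of the job and archives the diff between them. A minimal sketch of that idiom, covering only the Debian-family branch seen in the trace (file names taken from the trace; WORKSPACE is a stand-in for the job's workspace path):

#!/bin/bash
# Sketch of the start/end package-diff idiom from package-listing.sh above
# (Debian/Ubuntu branch only; file names as in the trace).
WORKSPACE=/w/workspace/policy-pap-master-project-csit-pap
START_PACKAGES=/tmp/packages_start.txt
END_PACKAGES=/tmp/packages_end.txt
DIFF_PACKAGES=/tmp/packages_diff.txt

# Snapshot the currently installed packages ("ii" rows are installed-and-configured).
dpkg -l | grep '^ii' > "$END_PACKAGES"

# When both snapshots exist, the diff shows exactly what the job installed or removed.
if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
    diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true  # diff exits 1 when the lists differ
fi

# Archive all three lists next to the other build artifacts.
mkdir -p "$WORKSPACE/archives/"
cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$WORKSPACE/archives/"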
23:17:15 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2940540099925184987.sh
23:17:15 ---> job-cost.sh
23:17:15 Setup pyenv:
23:17:15 system
23:17:15 3.8.13
23:17:15 3.9.13
23:17:15 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:15 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
23:17:17 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:21 lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
23:17:21 INFO: No Stack...
23:17:21 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:22 INFO: Archiving Costs
23:17:22 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins11354269088158233846.sh
23:17:22 ---> logs-deploy.sh
23:17:22 Setup pyenv:
23:17:22 system
23:17:22 3.8.13
23:17:22 3.9.13
23:17:22 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:22 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
23:17:23 lf-activate-venv(): INFO: Installing: lftools
23:17:32 lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
23:17:32 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1611
23:17:32 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:17:33 Archives upload complete.
23:17:33 INFO: archiving logs to Nexus
23:17:34 ---> uname -a:
23:17:34 Linux prd-ubuntu1804-docker-8c-8g-13424 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:17:34
23:17:34
23:17:34 ---> lscpu:
23:17:34 Architecture: x86_64
23:17:34 CPU op-mode(s): 32-bit, 64-bit
23:17:34 Byte Order: Little Endian
23:17:34 CPU(s): 8
23:17:34 On-line CPU(s) list: 0-7
23:17:34 Thread(s) per core: 1
23:17:34 Core(s) per socket: 1
23:17:34 Socket(s): 8
23:17:34 NUMA node(s): 1
23:17:34 Vendor ID: AuthenticAMD
23:17:34 CPU family: 23
23:17:34 Model: 49
23:17:34 Model name: AMD EPYC-Rome Processor
23:17:34 Stepping: 0
23:17:34 CPU MHz: 2800.000
23:17:34 BogoMIPS: 5600.00
23:17:34 Virtualization: AMD-V
23:17:34 Hypervisor vendor: KVM
23:17:34 Virtualization type: full
23:17:34 L1d cache: 32K
23:17:34 L1i cache: 32K
23:17:34 L2 cache: 512K
23:17:34 L3 cache: 16384K
23:17:34 NUMA node0 CPU(s): 0-7
23:17:34 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:17:34
23:17:34
23:17:34 ---> nproc:
23:17:34 8
23:17:34
23:17:34
23:17:34 ---> df -h:
23:17:34 Filesystem Size Used Avail Use% Mounted on
23:17:34 udev 16G 0 16G 0% /dev
23:17:34 tmpfs 3.2G 708K 3.2G 1% /run
23:17:34 /dev/vda1 155G 14G 142G 9% /
23:17:34 tmpfs 16G 0 16G 0% /dev/shm
23:17:34 tmpfs 5.0M 0 5.0M 0% /run/lock
23:17:34 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:17:34 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:17:34 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:17:34
23:17:34
23:17:34 ---> free -m:
23:17:34 total used free shared buff/cache available
23:17:34 Mem: 32167 858 24866 0 6441 30852
23:17:34 Swap: 1023 0 1023
23:17:34
23:17:34
23:17:34 ---> ip addr:
23:17:34 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:17:34 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:17:34 inet 127.0.0.1/8 scope host lo
23:17:34 valid_lft forever preferred_lft forever
23:17:34 inet6 ::1/128 scope host
23:17:34 valid_lft forever preferred_lft forever
23:17:34 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:17:34 link/ether fa:16:3e:1a:ce:12 brd ff:ff:ff:ff:ff:ff
23:17:34 inet 10.30.107.209/23 brd 10.30.107.255 scope global dynamic ens3
23:17:34 valid_lft 85967sec preferred_lft 85967sec
23:17:34 inet6 fe80::f816:3eff:fe1a:ce12/64 scope link
23:17:34 valid_lft forever preferred_lft forever
23:17:34 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:17:34 link/ether 02:42:14:ae:a1:6f brd ff:ff:ff:ff:ff:ff
23:17:34 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:17:34 valid_lft forever preferred_lft forever
23:17:34
23:17:34
23:17:34 ---> sar -b -r -n DEV:
23:17:34 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-13424) 03/15/24 _x86_64_ (8 CPU)
23:17:34
23:17:34 23:10:24 LINUX RESTART (8 CPU)
23:17:34
23:17:34 23:11:01 tps rtps wtps bread/s bwrtn/s
23:17:34 23:12:01 121.41 39.81 81.60 1813.83 27762.71
23:17:34 23:13:01 131.13 23.11 108.02 2766.74 33297.52
23:17:34 23:14:01 548.63 13.00 535.63 796.17 168179.10
23:17:34 23:15:01 33.96 0.47 33.49 34.13 26939.23
23:17:34 23:16:01 16.56 0.00 16.56 0.00 21042.84
23:17:34 23:17:01 68.17 0.92 67.26 49.33 23184.49
23:17:34 Average: 153.31 12.88 140.43 910.03 50067.65
23:17:34
23:17:34 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:17:34 23:12:01 30077480 31668912 2861740 8.69 70288 1831696 1424148 4.19 901128 1667928 158272
23:17:34 23:13:01 28482120 31641676 4457100 13.53 105288 3315136 1593772 4.69 1011416 3052624 1302884
23:17:34 23:14:01 24484332 30655996 8454888 25.67 157760 6116164 7404776 21.79 2152420 5697360 172
23:17:34 23:15:01 23238468 29526652 9700752 29.45 159324 6227464 8849140 26.04 3343376 5737800 220
23:17:34 23:16:01 23221784 29510860 9717436 29.50 159448 6228040 8849652 26.04 3361056 5737756 284
23:17:34 23:17:01 25436552 31558324 7502668 22.78 160652 6076568 1571116 4.62 1356812 5589880 3052
23:17:34 Average: 25823456 30760403 7115764 21.60 135460 4965845 4948767 14.56 2021035 4580558 244147
23:17:34
23:17:34 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:17:34 23:12:01 lo 1.73 1.73 0.18 0.18 0.00 0.00 0.00 0.00
23:17:34 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:34 23:12:01 ens3 173.79 112.63 1048.02 40.09 0.00 0.00 0.00 0.00
23:17:34 23:13:01 lo 7.00 7.00 0.65 0.65 0.00 0.00 0.00 0.00
23:17:34 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:34 23:13:01 br-ecaf75b0a48b 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:34 23:13:01 ens3 226.65 147.83 6510.28 15.21 0.00 0.00 0.00 0.00
23:17:34 23:14:01 vethe55f17e 0.45 0.70 0.05 0.30 0.00 0.00 0.00 0.00
23:17:34 23:14:01 lo 6.60 6.60 0.66 0.66 0.00 0.00 0.00 0.00
23:17:34 23:14:01 veth2fef3ae 29.88 38.39 2.96 4.51 0.00 0.00 0.00 0.00
23:17:34 23:14:01 vethae330f7 0.53 0.67 0.03 0.04 0.00 0.00 0.00 0.00
23:17:34 23:15:01 vethe55f17e 0.13 0.25 0.01 0.01 0.00 0.00 0.00 0.00
23:17:34 23:15:01 lo 5.07 5.07 3.51 3.51 0.00 0.00 0.00 0.00
23:17:34 23:15:01 veth2fef3ae 75.90 88.85 74.11 26.86 0.00 0.00 0.00 0.01
23:17:34 23:15:01 vethae330f7 45.89 40.69 10.79 36.41 0.00 0.00 0.00 0.00
23:17:34 23:16:01 vethe55f17e 0.15 0.05 0.01 0.00 0.00 0.00 0.00 0.00
23:17:34 23:16:01 lo 4.82 4.82 0.36 0.36 0.00 0.00 0.00 0.00
23:17:34 23:16:01 veth2fef3ae 1.50 1.72 0.54 0.39 0.00 0.00 0.00 0.00
23:17:34 23:16:01 vethae330f7 8.67 11.45 2.09 1.32 0.00 0.00 0.00 0.00
23:17:34 23:17:01 lo 5.43 5.43 0.48 0.48 0.00 0.00 0.00 0.00
23:17:34 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:34 23:17:01 ens3 1680.80 1050.91 35260.45 173.00 0.00 0.00 0.00 0.00
23:17:34 Average: lo 5.11 5.11 0.97 0.97 0.00 0.00 0.00 0.00
23:17:34 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:34 Average: ens3 243.68 147.91 5775.89 23.26 0.00 0.00 0.00 0.00
23:17:34
23:17:34
23:17:34 ---> sar -P ALL:
23:17:34 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-13424) 03/15/24 _x86_64_ (8 CPU)
23:17:34
23:17:34 23:10:24 LINUX RESTART (8 CPU)
23:17:34
23:17:34 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:17:34 23:12:01 all 10.56 0.00 0.77 2.25 0.04 86.38
23:17:34 23:12:01 0 4.05 0.00 0.47 0.55 0.02 94.91
23:17:34 23:12:01 1 13.70 0.00 1.03 0.53 0.05 84.68
23:17:34 23:12:01 2 0.68 0.00 0.30 0.38 0.07 98.56
23:17:34 23:12:01 3 0.55 0.00 0.42 13.50 0.03 85.49
23:17:34 23:12:01 4 12.11 0.00 0.90 0.67 0.02 86.30
23:17:34 23:12:01 5 33.52 0.00 1.64 1.99 0.07 62.79
23:17:34 23:12:01 6 16.68 0.00 0.98 0.30 0.03 82.00
23:17:34 23:12:01 7 3.22 0.00 0.40 0.12 0.02 96.25
23:17:34 23:13:01 all 11.45 0.00 2.18 2.40 0.04 83.93
23:17:34 23:13:01 0 6.08 0.00 2.10 0.55 0.03 91.24
23:17:34 23:13:01 1 19.55 0.00 2.60 1.69 0.05 76.11
23:17:34 23:13:01 2 7.09 0.00 1.57 2.36 0.03 88.95
23:17:34 23:13:01 3 3.84 0.00 1.93 9.44 0.05 84.74
23:17:34 23:13:01 4 5.03 0.00 1.59 0.03 0.03 93.31
23:17:34 23:13:01 5 29.98 0.00 3.49 1.95 0.07 64.51
23:17:34 23:13:01 6 15.39 0.00 2.48 0.75 0.03 81.34
23:17:34 23:13:01 7 4.75 0.00 1.68 2.39 0.02 91.16
23:17:34 23:14:01 all 19.93 0.00 6.34 3.68 0.08 69.97
23:17:34 23:14:01 0 23.72 0.00 7.23 1.00 0.08 67.97
23:17:34 23:14:01 1 23.01 0.00 6.31 1.00 0.10 69.57
23:17:34 23:14:01 2 20.50 0.00 5.56 2.47 0.07 71.40
23:17:34 23:14:01 3 21.83 0.00 6.23 1.19 0.08 70.67
23:17:34 23:14:01 4 20.24 0.00 6.30 2.53 0.07 70.86
23:17:34 23:14:01 5 17.51 0.00 5.82 3.23 0.09 73.36
23:17:34 23:14:01 6 15.90 0.00 7.47 14.22 0.10 62.31
23:17:34 23:14:01 7 16.71 0.00 5.77 3.81 0.07 73.64
23:17:34 23:15:01 all 22.78 0.00 2.02 0.29 0.07 74.84
23:17:34 23:15:01 0 23.76 0.00 1.94 0.03 0.07 74.20
23:17:34 23:15:01 1 19.19 0.00 1.86 0.02 0.05 78.89
23:17:34 23:15:01 2 23.80 0.00 2.49 0.02 0.07 73.63
23:17:34 23:15:01 3 21.62 0.00 1.79 0.15 0.08 76.36
23:17:34 23:15:01 4 27.29 0.00 2.16 0.02 0.07 70.47
23:17:34 23:15:01 5 20.35 0.00 1.56 0.02 0.07 78.01
23:17:34 23:15:01 6 30.03 0.00 2.59 0.23 0.07 67.07
23:17:34 23:15:01 7 16.18 0.00 1.84 1.86 0.07 80.06
23:17:34 23:16:01 all 1.48 0.00 0.16 1.11 0.04 97.20
23:17:34 23:16:01 0 0.63 0.00 0.12 0.00 0.02 99.23
23:17:34 23:16:01 1 1.37 0.00 0.20 0.12 0.05 98.26
23:17:34 23:16:01 2 0.95 0.00 0.15 0.00 0.03 98.86
23:17:34 23:16:01 3 1.98 0.00 0.28 0.03 0.10 97.61
23:17:34 23:16:01 4 0.90 0.00 0.08 0.00 0.03 98.98
23:17:34 23:16:01 5 3.11 0.00 0.17 0.00 0.07 96.66
23:17:34 23:16:01 6 1.47 0.00 0.12 0.12 0.03 98.26
23:17:34 23:16:01 7 1.45 0.00 0.20 8.61 0.03 89.70
23:17:34 23:17:01 all 3.39 0.00 0.62 1.36 0.04 94.58
23:17:34 23:17:01 0 2.66 0.00 0.70 0.03 0.05 96.56
23:17:34 23:17:01 1 1.24 0.00 0.64 0.35 0.03 97.74
23:17:34 23:17:01 2 2.36 0.00 0.67 0.15 0.05 96.78
23:17:34 23:17:01 3 1.81 0.00 0.55 0.12 0.03 97.49
23:17:34 23:17:01 4 12.75 0.00 0.67 0.38 0.05 86.15
23:17:34 23:17:01 5 3.09 0.00 0.50 0.25 0.03 96.13
23:17:34 23:17:01 6 1.92 0.00 0.65 1.32 0.03 96.07
23:17:34 23:17:01 7 1.27 0.00 0.64 8.27 0.03 89.79
23:17:34 Average: all 11.57 0.00 2.00 1.84 0.05 84.53
23:17:34 Average: 0 10.10 0.00 2.08 0.36 0.04 87.41
23:17:34 Average: 1 12.98 0.00 2.09 0.62 0.06 84.25
23:17:34 Average: 2 9.19 0.00 1.78 0.89 0.05 88.09
23:17:34 Average: 3 8.57 0.00 1.86 4.08 0.06 85.43
23:17:34 Average: 4 13.03 0.00 1.94 0.60 0.04 84.38
23:17:34 Average: 5 17.87 0.00 2.18 1.23 0.06 78.66
23:17:34 Average: 6 13.56 0.00 2.37 2.79 0.05 81.24
23:17:34 Average: 7 7.23 0.00 1.74 4.18 0.04 86.81