23:10:58 Started by timer
23:10:58 Running as SYSTEM
23:10:58 [EnvInject] - Loading node environment variables.
23:10:58 Building remotely on prd-ubuntu1804-docker-8c-8g-12398 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:58 [ssh-agent] Looking for ssh-agent implementation...
23:10:58 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:58 $ ssh-agent
23:10:58 SSH_AUTH_SOCK=/tmp/ssh-XUweo7XTkpVW/agent.2075
23:10:58 SSH_AGENT_PID=2077
23:10:58 [ssh-agent] Started.
23:10:58 Running ssh-add (command line suppressed)
23:10:58 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_7400233989166248468.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_7400233989166248468.key)
23:10:58 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:58 The recommended git tool is: NONE
23:11:00 using credential onap-jenkins-ssh
23:11:00 Wiping out workspace first.
23:11:00 Cloning the remote Git repository
23:11:00 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:00 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:00 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:00 > git --version # timeout=10
23:11:00 > git --version # 'git version 2.17.1'
23:11:00 using GIT_SSH to set credentials Gerrit user
23:11:00 Verifying host key using manually-configured host key entries
23:11:00 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:01 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:01 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:01 Avoid second fetch
23:11:01 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:01 Checking out Revision 5582cd406c8414919c4d5d7f5b116f4f1e5a971d (refs/remotes/origin/master)
23:11:01 > git config core.sparsecheckout # timeout=10
23:11:01 > git checkout -f 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=30
23:11:01 Commit message: "Merge "Add ACM regression test suite""
23:11:01 > git rev-list --no-walk 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=10
23:11:01 provisioning config files...
23:11:01 copy managed file [npmrc] to file:/home/jenkins/.npmrc 23:11:01 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 23:11:01 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7770874614456899456.sh 23:11:01 ---> python-tools-install.sh 23:11:01 Setup pyenv: 23:11:02 * system (set by /opt/pyenv/version) 23:11:02 * 3.8.13 (set by /opt/pyenv/version) 23:11:02 * 3.9.13 (set by /opt/pyenv/version) 23:11:02 * 3.10.6 (set by /opt/pyenv/version) 23:11:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-UIf3 23:11:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 23:11:09 lf-activate-venv(): INFO: Installing: lftools 23:11:44 lf-activate-venv(): INFO: Adding /tmp/venv-UIf3/bin to PATH 23:11:44 Generating Requirements File 23:12:13 Python 3.10.6 23:12:13 pip 24.0 from /tmp/venv-UIf3/lib/python3.10/site-packages/pip (python 3.10) 23:12:13 appdirs==1.4.4 23:12:13 argcomplete==3.2.3 23:12:13 aspy.yaml==1.3.0 23:12:13 attrs==23.2.0 23:12:13 autopage==0.5.2 23:12:13 beautifulsoup4==4.12.3 23:12:13 boto3==1.34.59 23:12:13 botocore==1.34.59 23:12:13 bs4==0.0.2 23:12:13 cachetools==5.3.3 23:12:13 certifi==2024.2.2 23:12:13 cffi==1.16.0 23:12:13 cfgv==3.4.0 23:12:13 chardet==5.2.0 23:12:13 charset-normalizer==3.3.2 23:12:13 click==8.1.7 23:12:13 cliff==4.6.0 23:12:13 cmd2==2.4.3 23:12:13 cryptography==3.3.2 23:12:13 debtcollector==3.0.0 23:12:13 decorator==5.1.1 23:12:13 defusedxml==0.7.1 23:12:13 Deprecated==1.2.14 23:12:13 distlib==0.3.8 23:12:13 dnspython==2.6.1 23:12:13 docker==4.2.2 23:12:13 dogpile.cache==1.3.2 23:12:13 email_validator==2.1.1 23:12:13 filelock==3.13.1 23:12:13 future==1.0.0 23:12:13 gitdb==4.0.11 23:12:13 GitPython==3.1.42 23:12:13 google-auth==2.28.2 23:12:13 httplib2==0.22.0 23:12:13 identify==2.5.35 23:12:13 idna==3.6 23:12:13 importlib-resources==1.5.0 23:12:13 iso8601==2.1.0 23:12:13 Jinja2==3.1.3 23:12:13 jmespath==1.0.1 23:12:13 jsonpatch==1.33 23:12:13 jsonpointer==2.4 23:12:13 jsonschema==4.21.1 23:12:13 jsonschema-specifications==2023.12.1 23:12:13 keystoneauth1==5.6.0 23:12:13 kubernetes==29.0.0 23:12:13 lftools==0.37.9 23:12:13 lxml==5.1.0 23:12:13 MarkupSafe==2.1.5 23:12:13 msgpack==1.0.8 23:12:13 multi_key_dict==2.0.3 23:12:13 netaddr==1.2.1 23:12:13 netifaces==0.11.0 23:12:13 niet==1.4.2 23:12:13 nodeenv==1.8.0 23:12:13 oauth2client==4.1.3 23:12:13 oauthlib==3.2.2 23:12:13 openstacksdk==3.0.0 23:12:13 os-client-config==2.1.0 23:12:13 os-service-types==1.7.0 23:12:13 osc-lib==3.0.1 23:12:13 oslo.config==9.4.0 23:12:13 oslo.context==5.5.0 23:12:13 oslo.i18n==6.3.0 23:12:13 oslo.log==5.5.0 23:12:13 oslo.serialization==5.4.0 23:12:13 oslo.utils==7.1.0 23:12:13 packaging==24.0 23:12:13 pbr==6.0.0 23:12:13 platformdirs==4.2.0 23:12:13 prettytable==3.10.0 23:12:13 pyasn1==0.5.1 23:12:13 pyasn1-modules==0.3.0 23:12:13 pycparser==2.21 23:12:13 pygerrit2==2.0.15 23:12:13 PyGithub==2.2.0 23:12:13 pyinotify==0.9.6 23:12:13 PyJWT==2.8.0 23:12:13 PyNaCl==1.5.0 23:12:13 pyparsing==2.4.7 23:12:13 pyperclip==1.8.2 23:12:13 pyrsistent==0.20.0 23:12:13 python-cinderclient==9.5.0 23:12:13 python-dateutil==2.9.0.post0 23:12:13 python-heatclient==3.5.0 23:12:13 python-jenkins==1.8.2 23:12:13 python-keystoneclient==5.4.0 23:12:13 python-magnumclient==4.4.0 23:12:13 python-novaclient==18.5.0 23:12:13 python-openstackclient==6.5.0 23:12:13 python-swiftclient==4.5.0 23:12:13 PyYAML==6.0.1 23:12:13 referencing==0.33.0 23:12:13 requests==2.31.0 23:12:13 requests-oauthlib==1.4.0 23:12:13 requestsexceptions==1.4.0 23:12:13 
rfc3986==2.0.0 23:12:13 rpds-py==0.18.0 23:12:13 rsa==4.9 23:12:13 ruamel.yaml==0.18.6 23:12:13 ruamel.yaml.clib==0.2.8 23:12:13 s3transfer==0.10.0 23:12:13 simplejson==3.19.2 23:12:13 six==1.16.0 23:12:13 smmap==5.0.1 23:12:13 soupsieve==2.5 23:12:13 stevedore==5.2.0 23:12:13 tabulate==0.9.0 23:12:13 toml==0.10.2 23:12:13 tomlkit==0.12.4 23:12:13 tqdm==4.66.2 23:12:13 typing_extensions==4.10.0 23:12:13 tzdata==2024.1 23:12:13 urllib3==1.26.18 23:12:13 virtualenv==20.25.1 23:12:13 wcwidth==0.2.13 23:12:13 websocket-client==1.7.0 23:12:13 wrapt==1.16.0 23:12:13 xdg==6.0.0 23:12:13 xmltodict==0.13.0 23:12:13 yq==3.2.3 23:12:14 [EnvInject] - Injecting environment variables from a build step. 23:12:14 [EnvInject] - Injecting as environment variables the properties content 23:12:14 SET_JDK_VERSION=openjdk17 23:12:14 GIT_URL="git://cloud.onap.org/mirror" 23:12:14 23:12:14 [EnvInject] - Variables injected successfully. 23:12:14 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins5547197736415807373.sh 23:12:14 ---> update-java-alternatives.sh 23:12:14 ---> Updating Java version 23:12:14 ---> Ubuntu/Debian system detected 23:12:14 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 23:12:14 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 23:12:14 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 23:12:14 openjdk version "17.0.4" 2022-07-19 23:12:14 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 23:12:14 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 23:12:14 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 23:12:14 [EnvInject] - Injecting environment variables from a build step. 23:12:14 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 23:12:14 [EnvInject] - Variables injected successfully. 
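The xtrace output below, from run-project-csit.sh, exercises a save/relax/restore pattern around sourced scripts: save_set snapshots the current shell options, relax_set loosens error handling before a third-party script is sourced, and source_safely wraps the two around each `. file` call. A minimal bash sketch of that pattern, reconstructed from the trace (the function and variable names appear in the trace; the exact bodies are an assumption):

save_set() {
    # snapshot the current single-letter options and SHELLOPTS so they can be restored later
    RUN_CSIT_SAVE_SET="$-"
    RUN_CSIT_SHELLOPTS="$SHELLOPTS"
}

relax_set() {
    # loosen strict error handling before running scripts we do not control
    set +e
    set +o pipefail
}

source_safely() {
    # source a script with relaxed error handling; the real script then calls
    # load_set to restore the saved options (not reproduced in this sketch)
    [ -z "$1" ] && return 1
    relax_set
    . "$1"
}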
23:12:14 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins322580600918453963.sh 23:12:14 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:14 + set +u 23:12:14 + save_set 23:12:14 + RUN_CSIT_SAVE_SET=ehxB 23:12:14 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:14 + '[' 1 -eq 0 ']' 23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:14 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:14 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:14 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:14 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:14 + export ROBOT_VARIABLES= 23:12:14 + ROBOT_VARIABLES= 23:12:14 + export PROJECT=pap 23:12:14 + PROJECT=pap 23:12:14 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:14 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:14 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:14 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:14 + relax_set 23:12:14 + set +e 23:12:14 + set +o pipefail 23:12:14 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:14 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:14 +++ mktemp -d 23:12:14 ++ ROBOT_VENV=/tmp/tmp.osDdee2OdK 23:12:14 ++ echo ROBOT_VENV=/tmp/tmp.osDdee2OdK 23:12:14 +++ python3 --version 23:12:14 ++ echo 'Python version is: Python 3.6.9' 23:12:14 Python version is: Python 3.6.9 23:12:14 ++ python3 -m venv --clear /tmp/tmp.osDdee2OdK 23:12:16 ++ source /tmp/tmp.osDdee2OdK/bin/activate 23:12:16 +++ deactivate nondestructive 23:12:16 +++ '[' -n '' ']' 23:12:16 +++ '[' -n '' ']' 23:12:16 +++ '[' -n /bin/bash -o -n '' ']' 23:12:16 +++ hash -r 23:12:16 +++ '[' -n '' ']' 23:12:16 +++ unset VIRTUAL_ENV 23:12:16 +++ '[' '!' 
nondestructive = nondestructive ']' 23:12:16 +++ VIRTUAL_ENV=/tmp/tmp.osDdee2OdK 23:12:16 +++ export VIRTUAL_ENV 23:12:16 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:16 +++ PATH=/tmp/tmp.osDdee2OdK/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:16 +++ export PATH 23:12:16 +++ '[' -n '' ']' 23:12:16 +++ '[' -z '' ']' 23:12:16 +++ _OLD_VIRTUAL_PS1= 23:12:16 +++ '[' 'x(tmp.osDdee2OdK) ' '!=' x ']' 23:12:16 +++ PS1='(tmp.osDdee2OdK) ' 23:12:16 +++ export PS1 23:12:16 +++ '[' -n /bin/bash -o -n '' ']' 23:12:16 +++ hash -r 23:12:16 ++ set -exu 23:12:16 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:19 ++ echo 'Installing Python Requirements' 23:12:19 Installing Python Requirements 23:12:19 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:39 ++ python3 -m pip -qq freeze 23:12:39 bcrypt==4.0.1 23:12:39 beautifulsoup4==4.12.3 23:12:39 bitarray==2.9.2 23:12:39 certifi==2024.2.2 23:12:39 cffi==1.15.1 23:12:39 charset-normalizer==2.0.12 23:12:39 cryptography==40.0.2 23:12:39 decorator==5.1.1 23:12:39 elasticsearch==7.17.9 23:12:39 elasticsearch-dsl==7.4.1 23:12:39 enum34==1.1.10 23:12:39 idna==3.6 23:12:39 importlib-resources==5.4.0 23:12:39 ipaddr==2.2.0 23:12:39 isodate==0.6.1 23:12:39 jmespath==0.10.0 23:12:39 jsonpatch==1.32 23:12:39 jsonpath-rw==1.4.0 23:12:39 jsonpointer==2.3 23:12:39 lxml==5.1.0 23:12:39 netaddr==0.8.0 23:12:39 netifaces==0.11.0 23:12:39 odltools==0.1.28 23:12:39 paramiko==3.4.0 23:12:39 pkg_resources==0.0.0 23:12:39 ply==3.11 23:12:39 pyang==2.6.0 23:12:39 pyangbind==0.8.1 23:12:39 pycparser==2.21 23:12:39 pyhocon==0.3.60 23:12:39 PyNaCl==1.5.0 23:12:39 pyparsing==3.1.2 23:12:39 python-dateutil==2.9.0.post0 23:12:39 regex==2023.8.8 23:12:39 requests==2.27.1 23:12:39 robotframework==6.1.1 23:12:39 robotframework-httplibrary==0.4.2 23:12:39 robotframework-pythonlibcore==3.0.0 23:12:39 robotframework-requests==0.9.4 23:12:39 robotframework-selenium2library==3.0.0 23:12:39 robotframework-seleniumlibrary==5.1.3 23:12:39 robotframework-sshlibrary==3.8.0 23:12:39 scapy==2.5.0 23:12:39 scp==0.14.5 23:12:39 selenium==3.141.0 23:12:39 six==1.16.0 23:12:39 soupsieve==2.3.2.post1 23:12:39 urllib3==1.26.18 23:12:39 waitress==2.0.0 23:12:39 WebOb==1.8.7 23:12:39 WebTest==3.0.0 23:12:39 zipp==3.6.0 23:12:39 ++ mkdir -p /tmp/tmp.osDdee2OdK/src/onap 23:12:39 ++ rm -rf /tmp/tmp.osDdee2OdK/src/onap/testsuite 23:12:39 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:45 ++ echo 'Installing python confluent-kafka library' 23:12:45 Installing python confluent-kafka library 23:12:45 ++ python3 -m pip install -qq confluent-kafka 23:12:46 ++ echo 'Uninstall docker-py and reinstall docker.' 23:12:46 Uninstall docker-py and reinstall docker. 
23:12:46 ++ python3 -m pip uninstall -y -qq docker 23:12:47 ++ python3 -m pip install -U -qq docker 23:12:48 ++ python3 -m pip -qq freeze 23:12:48 bcrypt==4.0.1 23:12:48 beautifulsoup4==4.12.3 23:12:48 bitarray==2.9.2 23:12:48 certifi==2024.2.2 23:12:48 cffi==1.15.1 23:12:48 charset-normalizer==2.0.12 23:12:48 confluent-kafka==2.3.0 23:12:48 cryptography==40.0.2 23:12:48 decorator==5.1.1 23:12:48 deepdiff==5.7.0 23:12:48 dnspython==2.2.1 23:12:48 docker==5.0.3 23:12:48 elasticsearch==7.17.9 23:12:48 elasticsearch-dsl==7.4.1 23:12:48 enum34==1.1.10 23:12:48 future==1.0.0 23:12:48 idna==3.6 23:12:48 importlib-resources==5.4.0 23:12:48 ipaddr==2.2.0 23:12:48 isodate==0.6.1 23:12:48 Jinja2==3.0.3 23:12:48 jmespath==0.10.0 23:12:48 jsonpatch==1.32 23:12:48 jsonpath-rw==1.4.0 23:12:48 jsonpointer==2.3 23:12:48 kafka-python==2.0.2 23:12:48 lxml==5.1.0 23:12:48 MarkupSafe==2.0.1 23:12:48 more-itertools==5.0.0 23:12:48 netaddr==0.8.0 23:12:48 netifaces==0.11.0 23:12:48 odltools==0.1.28 23:12:48 ordered-set==4.0.2 23:12:48 paramiko==3.4.0 23:12:48 pbr==6.0.0 23:12:48 pkg_resources==0.0.0 23:12:48 ply==3.11 23:12:48 protobuf==3.19.6 23:12:48 pyang==2.6.0 23:12:48 pyangbind==0.8.1 23:12:48 pycparser==2.21 23:12:48 pyhocon==0.3.60 23:12:48 PyNaCl==1.5.0 23:12:48 pyparsing==3.1.2 23:12:48 python-dateutil==2.9.0.post0 23:12:48 PyYAML==6.0.1 23:12:48 regex==2023.8.8 23:12:48 requests==2.27.1 23:12:48 robotframework==6.1.1 23:12:48 robotframework-httplibrary==0.4.2 23:12:48 robotframework-onap==0.6.0.dev105 23:12:48 robotframework-pythonlibcore==3.0.0 23:12:48 robotframework-requests==0.9.4 23:12:48 robotframework-selenium2library==3.0.0 23:12:48 robotframework-seleniumlibrary==5.1.3 23:12:48 robotframework-sshlibrary==3.8.0 23:12:48 robotlibcore-temp==1.0.2 23:12:48 scapy==2.5.0 23:12:48 scp==0.14.5 23:12:48 selenium==3.141.0 23:12:48 six==1.16.0 23:12:48 soupsieve==2.3.2.post1 23:12:48 urllib3==1.26.18 23:12:48 waitress==2.0.0 23:12:48 WebOb==1.8.7 23:12:48 websocket-client==1.3.1 23:12:48 WebTest==3.0.0 23:12:48 zipp==3.6.0 23:12:48 ++ uname 23:12:48 ++ grep -q Linux 23:12:48 ++ sudo apt-get -y -qq install libxml2-utils 23:12:49 + load_set 23:12:49 + _setopts=ehuxB 23:12:49 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:49 ++ tr : ' ' 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o braceexpand 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o hashall 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o interactive-comments 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o nounset 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o xtrace 23:12:49 ++ echo ehuxB 23:12:49 ++ sed 's/./& /g' 23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:49 + set +e 23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:49 + set +h 23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:49 + set +u 23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:49 + set +x 23:12:49 + source_safely /tmp/tmp.osDdee2OdK/bin/activate 23:12:49 + '[' -z /tmp/tmp.osDdee2OdK/bin/activate ']' 23:12:49 + relax_set 23:12:49 + set +e 23:12:49 + set +o pipefail 23:12:49 + . 
/tmp/tmp.osDdee2OdK/bin/activate 23:12:49 ++ deactivate nondestructive 23:12:49 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:49 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:49 ++ export PATH 23:12:49 ++ unset _OLD_VIRTUAL_PATH 23:12:49 ++ '[' -n '' ']' 23:12:49 ++ '[' -n /bin/bash -o -n '' ']' 23:12:49 ++ hash -r 23:12:49 ++ '[' -n '' ']' 23:12:49 ++ unset VIRTUAL_ENV 23:12:49 ++ '[' '!' nondestructive = nondestructive ']' 23:12:49 ++ VIRTUAL_ENV=/tmp/tmp.osDdee2OdK 23:12:49 ++ export VIRTUAL_ENV 23:12:49 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:49 ++ PATH=/tmp/tmp.osDdee2OdK/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:49 ++ export PATH 23:12:49 ++ '[' -n '' ']' 23:12:49 ++ '[' -z '' ']' 23:12:49 ++ _OLD_VIRTUAL_PS1='(tmp.osDdee2OdK) ' 23:12:49 ++ '[' 'x(tmp.osDdee2OdK) ' '!=' x ']' 23:12:49 ++ PS1='(tmp.osDdee2OdK) (tmp.osDdee2OdK) ' 23:12:49 ++ export PS1 23:12:49 ++ '[' -n /bin/bash -o -n '' ']' 23:12:49 ++ hash -r 23:12:49 + load_set 23:12:49 + _setopts=hxB 23:12:49 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:49 ++ tr : ' ' 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o braceexpand 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o hashall 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o interactive-comments 23:12:49 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:49 + set +o xtrace 23:12:49 ++ echo hxB 23:12:49 ++ sed 's/./& /g' 23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:49 + set +h 23:12:49 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:49 + set +x 23:12:49 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:49 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:49 + export TEST_OPTIONS= 23:12:49 + TEST_OPTIONS= 23:12:49 ++ mktemp -d 23:12:49 + WORKDIR=/tmp/tmp.puIbOQCAlI 23:12:49 + cd /tmp/tmp.puIbOQCAlI 23:12:49 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:49 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:12:49 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:12:49 Configure a credential helper to remove this warning. 
See 23:12:49 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:12:49 23:12:49 Login Succeeded 23:12:49 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:49 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:49 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:12:49 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:49 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:49 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:49 + relax_set 23:12:49 + set +e 23:12:49 + set +o pipefail 23:12:49 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:49 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:12:49 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:49 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:12:49 +++ GERRIT_BRANCH=master 23:12:49 +++ echo GERRIT_BRANCH=master 23:12:49 GERRIT_BRANCH=master 23:12:49 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:12:49 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:12:49 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:12:49 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:12:50 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:50 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:50 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:50 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:50 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:50 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:50 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:12:50 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:50 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:12:50 +++ grafana=false 23:12:50 +++ gui=false 23:12:50 +++ [[ 2 -gt 0 ]] 23:12:50 +++ key=apex-pdp 23:12:50 +++ case $key in 23:12:50 +++ echo apex-pdp 23:12:50 apex-pdp 23:12:50 +++ component=apex-pdp 23:12:50 +++ shift 23:12:50 +++ [[ 1 -gt 0 ]] 23:12:50 +++ key=--grafana 23:12:50 +++ case $key in 23:12:50 +++ grafana=true 23:12:50 +++ shift 23:12:50 +++ [[ 0 -gt 0 ]] 23:12:50 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:12:50 +++ echo 'Configuring docker compose...' 23:12:50 Configuring docker compose... 
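The xtrace above shows start-compose.sh walking its arguments: a component name (here apex-pdp) followed by a --grafana switch, with gui defaulting to false. A minimal bash sketch of that option-parsing loop as suggested by the trace; the --gui branch and the exact structure are assumptions:

grafana=false
gui=false
while [[ $# -gt 0 ]]; do
    key="$1"
    case $key in
        --grafana)
            grafana=true          # also bring up the grafana/prometheus stack
            shift
            ;;
        --gui)
            gui=true              # assumed flag, mirroring the gui=false default in the trace
            shift
            ;;
        *)
            component="$key"      # e.g. apex-pdp: the application to start
            shift
            ;;
    esac
done
# later, as seen in the trace: if grafana=true, docker-compose up -d "$component" grafana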
23:12:50 +++ source export-ports.sh 23:12:50 +++ source get-versions.sh 23:12:52 +++ '[' -z pap ']' 23:12:52 +++ '[' -n apex-pdp ']' 23:12:52 +++ '[' apex-pdp == logs ']' 23:12:52 +++ '[' true = true ']' 23:12:52 +++ echo 'Starting apex-pdp application with Grafana' 23:12:52 Starting apex-pdp application with Grafana 23:12:52 +++ docker-compose up -d apex-pdp grafana 23:12:53 Creating network "compose_default" with the default driver 23:12:53 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:12:53 latest: Pulling from prom/prometheus 23:12:56 Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e 23:12:56 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:12:56 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 23:12:56 latest: Pulling from grafana/grafana 23:13:02 Digest: sha256:f9811e4e687ffecf1a43adb9b64096c50bc0d7a782f8608530f478b6542de7d5 23:13:02 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:13:02 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:13:02 10.10.2: Pulling from mariadb 23:13:07 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:13:07 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:13:07 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 23:13:07 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:12 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 23:13:12 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 23:13:12 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:12 latest: Pulling from confluentinc/cp-zookeeper 23:13:24 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 23:13:24 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:24 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:24 latest: Pulling from confluentinc/cp-kafka 23:13:28 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 23:13:28 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:28 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:28 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:39 Digest: sha256:ed573692302e5a28aa3b51a60adbd7641290e273719edd44bc9ff784d1569efa 23:13:39 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:13:39 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 23:13:39 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:13:41 Digest: sha256:fdc9aa26830be0af882248f5f576f0e9466b8e17ff432e8618d01432efa85803 23:13:41 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:13:41 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:13:41 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:13:43 Digest: sha256:5e7bdea16830f0dd3e16df519f0efbee38922192c2a79297bcac6699fa44e067 23:13:43 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:13:43 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
23:13:43 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:13:53 Digest: sha256:6150a977631ab72b68f6d8aef4c9bd1e7c9ba8079ef3864510ec09056daa579d 23:13:53 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:13:54 Creating prometheus ... 23:13:54 Creating mariadb ... 23:13:54 Creating simulator ... 23:13:54 Creating compose_zookeeper_1 ... 23:14:08 Creating simulator ... done 23:14:09 Creating compose_zookeeper_1 ... done 23:14:09 Creating kafka ... 23:14:10 Creating kafka ... done 23:14:11 Creating prometheus ... done 23:14:11 Creating grafana ... 23:14:12 Creating grafana ... done 23:14:13 Creating mariadb ... done 23:14:13 Creating policy-db-migrator ... 23:14:14 Creating policy-db-migrator ... done 23:14:14 Creating policy-api ... 23:14:15 Creating policy-api ... done 23:14:15 Creating policy-pap ... 23:14:16 Creating policy-pap ... done 23:14:16 Creating policy-apex-pdp ... 23:14:17 Creating policy-apex-pdp ... done 23:14:17 +++ echo 'Prometheus server: http://localhost:30259' 23:14:17 Prometheus server: http://localhost:30259 23:14:17 +++ echo 'Grafana server: http://localhost:30269' 23:14:17 Grafana server: http://localhost:30269 23:14:17 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:17 ++ sleep 10 23:14:27 ++ unset http_proxy https_proxy 23:14:27 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:27 Waiting for REST to come up on localhost port 30003... 23:14:27 NAMES STATUS 23:14:27 policy-apex-pdp Up 10 seconds 23:14:27 policy-pap Up 11 seconds 23:14:27 policy-api Up 12 seconds 23:14:27 policy-db-migrator Up 13 seconds 23:14:27 grafana Up 15 seconds 23:14:27 kafka Up 17 seconds 23:14:27 compose_zookeeper_1 Up 18 seconds 23:14:27 mariadb Up 14 seconds 23:14:27 prometheus Up 16 seconds 23:14:27 simulator Up 19 seconds 23:14:32 NAMES STATUS 23:14:32 policy-apex-pdp Up 15 seconds 23:14:32 policy-pap Up 16 seconds 23:14:32 policy-api Up 17 seconds 23:14:32 grafana Up 20 seconds 23:14:32 kafka Up 22 seconds 23:14:32 compose_zookeeper_1 Up 23 seconds 23:14:32 mariadb Up 19 seconds 23:14:32 prometheus Up 21 seconds 23:14:32 simulator Up 24 seconds 23:14:37 NAMES STATUS 23:14:37 policy-apex-pdp Up 20 seconds 23:14:37 policy-pap Up 21 seconds 23:14:37 policy-api Up 22 seconds 23:14:37 grafana Up 25 seconds 23:14:37 kafka Up 27 seconds 23:14:37 compose_zookeeper_1 Up 28 seconds 23:14:37 mariadb Up 24 seconds 23:14:37 prometheus Up 26 seconds 23:14:37 simulator Up 29 seconds 23:14:42 NAMES STATUS 23:14:42 policy-apex-pdp Up 25 seconds 23:14:42 policy-pap Up 26 seconds 23:14:42 policy-api Up 27 seconds 23:14:42 grafana Up 30 seconds 23:14:42 kafka Up 32 seconds 23:14:42 compose_zookeeper_1 Up 33 seconds 23:14:42 mariadb Up 29 seconds 23:14:42 prometheus Up 31 seconds 23:14:42 simulator Up 34 seconds 23:14:47 NAMES STATUS 23:14:47 policy-apex-pdp Up 30 seconds 23:14:47 policy-pap Up 31 seconds 23:14:47 policy-api Up 32 seconds 23:14:47 grafana Up 35 seconds 23:14:47 kafka Up 37 seconds 23:14:47 compose_zookeeper_1 Up 38 seconds 23:14:47 mariadb Up 34 seconds 23:14:47 prometheus Up 36 seconds 23:14:47 simulator Up 39 seconds 23:14:53 NAMES STATUS 23:14:53 policy-apex-pdp Up 35 seconds 23:14:53 policy-pap Up 36 seconds 23:14:53 policy-api Up 37 seconds 23:14:53 grafana Up 40 seconds 23:14:53 kafka Up 42 seconds 23:14:53 compose_zookeeper_1 Up 43 seconds 23:14:53 mariadb Up 39 seconds 23:14:53 prometheus Up 41 seconds 23:14:53 simulator Up 44 seconds 23:14:53 ++ export 
'SUITES=pap-test.robot
23:14:53 pap-slas.robot'
23:14:53 ++ SUITES='pap-test.robot
23:14:53 pap-slas.robot'
23:14:53 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:53 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:53 + load_set
23:14:53 + _setopts=hxB
23:14:53 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:14:53 ++ tr : ' '
23:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:53 + set +o braceexpand
23:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:53 + set +o hashall
23:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:53 + set +o interactive-comments
23:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:53 + set +o xtrace
23:14:53 ++ echo hxB
23:14:53 ++ sed 's/./& /g'
23:14:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:53 + set +h
23:14:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:53 + set +x
23:14:53 + docker_stats
23:14:53 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:14:53 ++ uname -s
23:14:53 + '[' Linux == Darwin ']'
23:14:53 + sh -c 'top -bn1 | head -3'
23:14:53 top - 23:14:53 up 4 min, 0 users, load average: 3.69, 1.83, 0.76
23:14:53 Tasks: 207 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
23:14:53 %Cpu(s): 13.2 us, 3.0 sy, 0.0 ni, 79.5 id, 4.2 wa, 0.0 hi, 0.1 si, 0.1 st
23:14:53 + echo
23:14:53 + sh -c 'free -h'
23:14:53
23:14:53 total used free shared buff/cache available
23:14:53 Mem: 31G 2.6G 22G 1.3M 6.2G 28G
23:14:53 Swap: 1.0G 0B 1.0G
23:14:53 + echo
23:14:53 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:14:53
23:14:53 NAMES STATUS
23:14:53 policy-apex-pdp Up 35 seconds
23:14:53 policy-pap Up 36 seconds
23:14:53 policy-api Up 37 seconds
23:14:53 grafana Up 41 seconds
23:14:53 kafka Up 43 seconds
23:14:53 compose_zookeeper_1 Up 44 seconds
23:14:53 mariadb Up 39 seconds
23:14:53 prometheus Up 42 seconds
23:14:53 simulator Up 45 seconds
23:14:53 + echo
23:14:53 + docker stats --no-stream
23:14:53
23:14:56 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:14:56 05dc42deb9a1 policy-apex-pdp 265.45% 113.9MiB / 31.41GiB 0.35% 3.9kB / 3.8kB 0B / 0B 35
23:14:56 75aa64d66874 policy-pap 1.66% 626.3MiB / 31.41GiB 1.95% 29.4kB / 30.9kB 0B / 153MB 64
23:14:56 541696ee1234 policy-api 0.11% 535.2MiB / 31.41GiB 1.66% 1MB / 738kB 0B / 0B 55
23:14:56 1879a66c2ccc grafana 0.04% 62.09MiB / 31.41GiB 0.19% 18.9kB / 3.69kB 0B / 24.7MB 18
23:14:56 d3fc987b292f kafka 30.11% 370.9MiB / 31.41GiB 1.15% 65.5kB / 68.7kB 0B / 475kB 84
23:14:56 481cd90cd001 compose_zookeeper_1 0.12% 99.94MiB / 31.41GiB 0.31% 52.2kB / 46kB 0B / 381kB 60
23:14:56 103d59592365 mariadb 0.02% 102.2MiB / 31.41GiB 0.32% 997kB / 1.19MB 11MB / 68MB 38
23:14:56 2e2353988bd3 prometheus 0.00% 19.19MiB / 31.41GiB 0.06% 28.4kB / 1.16kB 0B / 0B 13
23:14:56 1c8841bad4f4 simulator 0.07% 122.4MiB / 31.41GiB 0.38% 1.67kB / 0B 229kB / 0B 76
23:14:56 + echo
23:14:56
23:14:56 + cd /tmp/tmp.puIbOQCAlI
23:14:56 + echo 'Reading the testplan:'
23:14:56 Reading the testplan:
23:14:56 + echo 'pap-test.robot
23:14:56 pap-slas.robot'
23:14:56 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:14:56 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:14:56 + cat testplan.txt
23:14:56 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:56 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:56 ++ xargs
23:14:56 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:14:56 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:56 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:56 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:56 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:56 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:56 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
23:14:56 + relax_set
23:14:56 + set +e
23:14:56 + set +o pipefail
23:14:56 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:56 ==============================================================================
23:14:56 pap
23:14:56 ==============================================================================
23:14:56 pap.Pap-Test
23:14:56 ==============================================================================
23:14:57 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:57 ------------------------------------------------------------------------------
23:14:57 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:14:57 ------------------------------------------------------------------------------
23:14:58 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:14:58 ------------------------------------------------------------------------------
23:14:58 Healthcheck :: Verify policy pap health check | PASS |
23:14:58 ------------------------------------------------------------------------------
23:15:18 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:18 ------------------------------------------------------------------------------
23:15:19 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:19 ------------------------------------------------------------------------------
23:15:19 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:19 ------------------------------------------------------------------------------
23:15:20 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:20 ------------------------------------------------------------------------------
23:15:20 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:20 ------------------------------------------------------------------------------
23:15:20 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:20 ------------------------------------------------------------------------------
23:15:20 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:20 ------------------------------------------------------------------------------
23:15:20 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:20 ------------------------------------------------------------------------------
23:15:21 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:21 ------------------------------------------------------------------------------
23:15:21 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:21 ------------------------------------------------------------------------------
23:15:21 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:21 ------------------------------------------------------------------------------
23:15:21 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:21 ------------------------------------------------------------------------------
23:15:22 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:42 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:42 ------------------------------------------------------------------------------
23:15:42 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:42 ------------------------------------------------------------------------------
23:15:42 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:42 ------------------------------------------------------------------------------
23:15:42 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:42 ------------------------------------------------------------------------------
23:15:42 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:42 ------------------------------------------------------------------------------
23:15:42 pap.Pap-Test | PASS |
23:15:42 22 tests, 22 passed, 0 failed
23:15:42 ==============================================================================
23:15:42 pap.Pap-Slas
23:15:42 ==============================================================================
23:16:42 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:42 ------------------------------------------------------------------------------
23:16:43 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:43 ------------------------------------------------------------------------------
23:16:43 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:43 ------------------------------------------------------------------------------
23:16:43 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:43 ------------------------------------------------------------------------------
23:16:43 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:43 ------------------------------------------------------------------------------
23:16:43 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:43 ------------------------------------------------------------------------------
23:16:43 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:43 ------------------------------------------------------------------------------
23:16:43 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:43 ------------------------------------------------------------------------------
23:16:43 pap.Pap-Slas | PASS |
23:16:43 8 tests, 8 passed, 0 failed
23:16:43 ==============================================================================
23:16:43 pap | PASS |
23:16:43 30 tests, 30 passed, 0 failed
23:16:43 ==============================================================================
23:16:43 Output: /tmp/tmp.puIbOQCAlI/output.xml
23:16:43 Log: /tmp/tmp.puIbOQCAlI/log.html
23:16:43 Report: /tmp/tmp.puIbOQCAlI/report.html
23:16:43 + RESULT=0
23:16:43 + load_set
23:16:43 + _setopts=hxB
23:16:43 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:43 ++ tr : ' '
23:16:43 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:43 + set +o braceexpand
23:16:43 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:43 + set +o hashall
23:16:43 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:43 + set +o interactive-comments
23:16:43 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:43 + set +o xtrace
23:16:43 ++ echo hxB
23:16:43 ++ sed 's/./& /g'
23:16:43 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:43 + set +h
23:16:43 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:43 + set +x
23:16:43 + echo 'RESULT: 0'
23:16:43 RESULT: 0
23:16:43 + exit 0
23:16:43 + on_exit
23:16:43 + rc=0
23:16:43 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:43 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:43 NAMES STATUS
23:16:43 policy-apex-pdp Up 2 minutes
23:16:43 policy-pap Up 2 minutes
23:16:43 policy-api Up 2 minutes
23:16:43 grafana Up 2 minutes
23:16:43 kafka Up 2 minutes
23:16:43 compose_zookeeper_1 Up 2 minutes
23:16:43 mariadb Up 2 minutes
23:16:43 prometheus Up 2 minutes
23:16:43 simulator Up 2 minutes
23:16:43 + docker_stats
23:16:43 ++ uname -s
23:16:43 + '[' Linux == Darwin ']'
23:16:43 + sh -c 'top -bn1 | head -3'
23:16:43 top - 23:16:43 up 6 min, 0 users, load average: 0.88, 1.43, 0.74
23:16:43 Tasks: 198 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
23:16:43 %Cpu(s): 11.0 us, 2.3 sy, 0.0 ni, 83.3 id, 3.3 wa, 0.0 hi, 0.1 si, 0.1 st
23:16:43 + echo
23:16:43
23:16:43 + sh -c 'free -h'
23:16:43 total used free shared buff/cache available
23:16:43 Mem: 31G 2.8G 22G 1.3M 6.2G 28G
23:16:43 Swap: 1.0G 0B 1.0G
23:16:43 + echo
23:16:43
23:16:43 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:43 NAMES STATUS
23:16:43 policy-apex-pdp Up 2 minutes
23:16:43 policy-pap Up 2 minutes
23:16:43 policy-api Up 2 minutes
23:16:43 grafana Up 2 minutes
23:16:43 kafka Up 2 minutes
23:16:43 compose_zookeeper_1 Up 2 minutes
23:16:43 mariadb Up 2 minutes
23:16:43 prometheus Up 2 minutes
23:16:43 simulator Up 2 minutes
23:16:43 + echo
23:16:43
23:16:43 + docker stats --no-stream
23:16:46 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:16:46 05dc42deb9a1 policy-apex-pdp 1.36% 186.8MiB / 31.41GiB 0.58% 56kB / 90.1kB 0B / 0B 52
23:16:46 75aa64d66874 policy-pap 0.68% 471MiB / 31.41GiB 1.46% 2.33MB / 815kB 0B / 153MB 68
23:16:46 541696ee1234 policy-api 0.09% 606.6MiB / 31.41GiB 1.89% 2.49MB / 1.29MB 0B / 0B 58
23:16:46 1879a66c2ccc grafana 0.04% 56.66MiB / 31.41GiB 0.18% 21.8kB / 4.87kB 0B / 24.7MB 18
23:16:46 d3fc987b292f kafka 1.17% 393.6MiB / 31.41GiB 1.22% 232kB / 209kB 0B / 573kB 85
23:16:46 481cd90cd001 compose_zookeeper_1 0.08% 99.98MiB / 31.41GiB 0.31% 55.1kB / 47.6kB 0B / 381kB 60
23:16:46 103d59592365 mariadb 0.01% 103.6MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 68.3MB 28
23:16:46 2e2353988bd3 prometheus
0.00% 25.73MiB / 31.41GiB 0.08% 219kB / 11.9kB 0B / 0B 13 23:16:46 1c8841bad4f4 simulator 0.06% 122.5MiB / 31.41GiB 0.38% 1.94kB / 0B 229kB / 0B 78 23:16:46 + echo 23:16:46 23:16:46 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:46 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:16:46 + relax_set 23:16:46 + set +e 23:16:46 + set +o pipefail 23:16:46 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:46 ++ echo 'Shut down started!' 23:16:46 Shut down started! 23:16:46 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:46 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:16:46 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:16:46 ++ source export-ports.sh 23:16:46 ++ source get-versions.sh 23:16:48 ++ echo 'Collecting logs from docker compose containers...' 23:16:48 Collecting logs from docker compose containers... 23:16:48 ++ docker-compose logs 23:16:49 ++ cat docker_compose.log 23:16:49 Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, compose_zookeeper_1, mariadb, prometheus, simulator 23:16:49 zookeeper_1 | ===> User 23:16:49 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:49 zookeeper_1 | ===> Configuring ... 23:16:49 zookeeper_1 | ===> Running preflight checks ... 23:16:49 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:49 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:49 zookeeper_1 | ===> Launching ... 23:16:49 zookeeper_1 | ===> Launching zookeeper ... 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,008] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,016] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,016] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,016] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,016] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,018] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,018] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,018] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,018] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,019] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,019] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,020] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,020] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,020] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,020] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,020] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,032] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,035] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,035] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,037] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,047] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:host.name=481cd90cd001 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/k
afka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 
23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,049] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,050] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,050] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,050] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,052] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,052] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,053] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,053] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,054] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,054] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,054] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,054] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,054] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,054] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,056] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,056] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,057] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,057] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,057] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,079] INFO Logging initialized @566ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,172] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,172] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,194] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,227] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,227] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,229] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,232] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,240] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,259] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,260] INFO Started @747ms (org.eclipse.jetty.server.Server) 23:16:49 
zookeeper_1 | [2024-03-10 23:14:13,260] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,268] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,269] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,271] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,272] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,313] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,313] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,314] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,314] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,319] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,319] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,322] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,323] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,324] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,333] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,334] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,348] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:49 zookeeper_1 | [2024-03-10 23:14:13,349] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 23:16:49 zookeeper_1 | [2024-03-10 23:14:14,952] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508303337Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2024-03-10T23:14:12Z 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508637735Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508655405Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508659475Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508666795Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508670575Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508673585Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508677426Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508681416Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508684866Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508881Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508901341Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508905371Z level=info msg=Target target=[all] 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508911991Z level=info msg="Path Home" path=/usr/share/grafana 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508914601Z level=info msg="Path Data" path=/var/lib/grafana 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508917561Z level=info msg="Path Logs" path=/var/log/grafana 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508921251Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508926741Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:49 grafana | logger=settings t=2024-03-10T23:14:12.508929991Z level=info msg="App mode production" 23:16:49 grafana | logger=sqlstore t=2024-03-10T23:14:12.50932612Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:49 grafana | logger=sqlstore t=2024-03-10T23:14:12.509357031Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.510267781Z level=info msg="Starting DB migrations" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.511309235Z level=info msg="Executing migration" id="create migration_log table" 23:16:49 grafana | 
logger=migrator t=2024-03-10T23:14:12.512256366Z level=info msg="Migration successfully executed" id="create migration_log table" duration=946.841µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.521982647Z level=info msg="Executing migration" id="create user table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.523282147Z level=info msg="Migration successfully executed" id="create user table" duration=1.29716ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.527217797Z level=info msg="Executing migration" id="add unique index user.login" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.528485196Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.266859ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.53222211Z level=info msg="Executing migration" id="add unique index user.email" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.532985237Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=763.097µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.536314164Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.537081441Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=766.897µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.543055067Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.543828614Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=771.927µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.547408017Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.550098237Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.68596ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.553820442Z level=info msg="Executing migration" id="create user table v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.554743433Z level=info msg="Migration successfully executed" id="create user table v2" duration=923.191µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.562211053Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.56343158Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.221207ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.567136825Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.568427574Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.298709ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.571686208Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.572208359Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=521.831µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.577624123Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.578242088Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=614.935µs 23:16:49 grafana | 
logger=migrator t=2024-03-10T23:14:12.581970732Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.583730762Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.75891ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.587983739Z level=info msg="Executing migration" id="Update user table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.58801236Z level=info msg="Migration successfully executed" id="Update user table charset" duration=29.001µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.591488408Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.592696546Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.208008ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.598468248Z level=info msg="Executing migration" id="Add missing user data" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.598807285Z level=info msg="Migration successfully executed" id="Add missing user data" duration=338.358µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.602317295Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.603562614Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.241229ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.60690609Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.607725788Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=818.788µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.610789227Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.612020056Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.230459ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.6843501Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.695586816Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.237976ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.69882461Z level=info msg="Executing migration" id="Add uid column to user" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.699673479Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=846.959µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.70279485Z level=info msg="Executing migration" id="Update uid column values for users" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.703042546Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=246.906µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.706508944Z level=info msg="Executing migration" id="Add unique index user_uid" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.707330864Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=816.46µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.713124266Z level=info msg="Executing migration" id="create temp 
user table v1-7" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.714516047Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.391531ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.717942254Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.719119922Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.169547ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.725045666Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.725808183Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=761.967µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.729346974Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.730583022Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.236658ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.734586593Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.735862452Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.280589ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.73971566Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.73974293Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=31.62µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.745095962Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.745915271Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=818.339µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.749683637Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.752121623Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=2.434546ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.757127926Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.758026587Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=904.261µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.76435694Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.765359613Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.006143ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.768562206Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.771554465Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.991729ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.783481076Z level=info 
msg="Executing migration" id="create temp_user v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.784636691Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.155276ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.788225293Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.789120844Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=895.001µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.795221743Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.796150763Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=928.21µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.799717195Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.800545273Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=825.578µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.809446196Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.810280595Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=833.869µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.816201109Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.816534868Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=333.619µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.819637238Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.820246202Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=609.423µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.823402293Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.823740311Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=338.048µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.82942808Z level=info msg="Executing migration" id="create star table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.829965962Z level=info msg="Migration successfully executed" id="create star table" duration=537.362µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.832613413Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.833225047Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=610.814µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.836940832Z level=info msg="Executing migration" id="create org table v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.837677748Z level=info msg="Migration successfully executed" id="create org table v1" duration=733.647µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.841332361Z level=info msg="Executing migration" id="create index 
UQE_org_name - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.841930635Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=597.794µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.847515522Z level=info msg="Executing migration" id="create org_user table v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.848081805Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=566.063µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.851618765Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.852223499Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=605.114µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.85577946Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.856396854Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=631.844µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.860547498Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.861134082Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=587.344µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.867262531Z level=info msg="Executing migration" id="Update org table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.867287162Z level=info msg="Migration successfully executed" id="Update org table charset" duration=27.221µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.870872683Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.870896954Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=28.191µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.874226439Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.874413033Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=186.574µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.877638477Z level=info msg="Executing migration" id="create dashboard table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.878253971Z level=info msg="Migration successfully executed" id="create dashboard table" duration=615.565µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.884425541Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.885294251Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=867.91µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.888680348Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.889669271Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=989.123µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.893285673Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:49 grafana | logger=migrator 
t=2024-03-10T23:14:12.89405921Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=773.977µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.897735685Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.8984196Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=684.715µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.903970206Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.90459129Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=620.694µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.908080209Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.913194476Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.113477ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.91690937Z level=info msg="Executing migration" id="create dashboard v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.917785201Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=874.99µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.922981259Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.923595502Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=613.433µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.927126452Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.928092674Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=964.672µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.931941902Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.932390332Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=448.23µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.938528502Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.939405282Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=876.52µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.943164527Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.94327624Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=111.283µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.946973604Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.948864687Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.892573ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.955525578Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:49 grafana | 
logger=migrator t=2024-03-10T23:14:12.957513984Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.988656ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.961407992Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.963243184Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.834952ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.967327976Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.967954711Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=626.555µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.974189773Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.975537313Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.347ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.978961182Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.979581506Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=619.454µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.984653811Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.986047112Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.392391ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.991967347Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.992055739Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=88.832µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.995970229Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.996064101Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=93.612µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:12.999416897Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.001670068Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.256401ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.045278977Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.046870693Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.592226ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.051851016Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.053529644Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.678298ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.0582017Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:49 grafana | logger=migrator 
t=2024-03-10T23:14:13.059573241Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.37112ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.065937345Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.06657531Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=643.455µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.070424307Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.07186599Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.441373ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.075457531Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.07630177Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=847.359µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.079895052Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.079932903Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=38.961µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.087960004Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.090786698Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=2.827884ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.094573304Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.095353632Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=779.808µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.10277519Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.108263084Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.488834ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.11378457Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.114356543Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=571.312µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.117364341Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.118009295Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=644.224µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.121269959Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.12261674Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.373252ms 
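The grafana container spends its startup running schema migrations against the freshly created SQLite file /var/lib/grafana/grafana.db: each "Executing migration" / "Migration successfully executed" pair carries the migration id and its duration, and applied migrations are tracked in the migration_log table created at the top of the run. Grafana implements this in Go; the JDBC sketch below only illustrates the same execute-then-record pattern under assumed names (the MigrationLogSketch class, the migration_log column layout, and the example DDL are all hypothetical) and assumes a SQLite JDBC driver such as org.xerial:sqlite-jdbc on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.time.Instant;

public class MigrationLogSketch {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:sqlite:grafana-sketch.db")) {
            try (Statement st = db.createStatement()) {
                // Hypothetical schema; Grafana's real migration_log columns are not shown in the log.
                st.execute("CREATE TABLE IF NOT EXISTS migration_log ("
                        + "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                        + "migration_id TEXT NOT NULL, "
                        + "success INTEGER NOT NULL, "
                        + "duration_ms INTEGER NOT NULL, "
                        + "applied_at TEXT NOT NULL)");
            }
            applyMigration(db, "create user table",
                    "CREATE TABLE IF NOT EXISTS \"user\" (id INTEGER PRIMARY KEY, login TEXT, email TEXT)");
        }
    }

    static void applyMigration(Connection db, String migrationId, String ddl) throws Exception {
        long start = System.nanoTime();
        try (Statement st = db.createStatement()) {
            st.execute(ddl);                       // run the migration itself
        }
        long durationMs = (System.nanoTime() - start) / 1_000_000;
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO migration_log (migration_id, success, duration_ms, applied_at) VALUES (?, 1, ?, ?)")) {
            ps.setString(1, migrationId);
            ps.setLong(2, durationMs);
            ps.setString(3, Instant.now().toString());
            ps.executeUpdate();                    // record what was applied, mirroring the entries in this log
        }
    }
}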
23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.129890064Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.130494608Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=604.014µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.134435777Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.135382258Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=949.151µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.139244896Z level=info msg="Executing migration" id="Add check_sum column" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.14076582Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.520964ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.146751306Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.147584555Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=839.939µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.150925751Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.151170336Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=244.365µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.154294387Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.154534742Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=240.075µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.157597222Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.158462682Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=865.75µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.164227423Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.16763735Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.409177ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.171316213Z level=info msg="Executing migration" id="create data_source table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.17247611Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.159117ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.175863696Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.176737196Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=870.28µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.18220023Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.183094479Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=892.24µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.186736853Z 
level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.188240806Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.504123ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.192055443Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.19323743Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.179006ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.199279006Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.205354774Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.074978ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.209419967Z level=info msg="Executing migration" id="create data_source table v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.210371768Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=948.831µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.215639237Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.216506817Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=867.46µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.220236762Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.221104641Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=865.669µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.225095521Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.225724006Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=627.725µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.231538948Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.235588959Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.048591ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.239627761Z level=info msg="Executing migration" id="Add secure json data column" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.24221438Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.585489ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.246386254Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.246467216Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=79.442µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.251453499Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.251722725Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=268.726µs 23:16:49 grafana | logger=migrator 
t=2024-03-10T23:14:13.257321472Z level=info msg="Executing migration" id="Add read_only data column" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.265355404Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=8.028782ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.270240155Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.270506001Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=265.426µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.274775597Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.275014722Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=237.365µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.279232118Z level=info msg="Executing migration" id="Add uid column" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.288240872Z level=info msg="Migration successfully executed" id="Add uid column" duration=9.008044ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.291968756Z level=info msg="Executing migration" id="Update uid value" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.292285503Z level=info msg="Migration successfully executed" id="Update uid value" duration=316.697µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.295569949Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.296561511Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=991.452µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.30178691Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.30271095Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=923.31µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.30622463Z level=info msg="Executing migration" id="create api_key table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.307200492Z level=info msg="Migration successfully executed" id="create api_key table" duration=975.202µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.311868717Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.312773379Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=906.452µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.319635994Z level=info msg="Executing migration" id="add index api_key.key" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.321402474Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.76435ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.326270224Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.327184195Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=913.321µs 23:16:49 kafka | ===> User 23:16:49 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:49 kafka | ===> Configuring ... 23:16:49 kafka | Running in Zookeeper mode... 
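Before starting the broker, the kafka container runs the preflight checks that follow, including "Check if Zookeeper is healthy ...". The sketch below shows roughly what such a probe amounts to with the plain org.apache.zookeeper.ZooKeeper client: connect, wait for SyncConnected, stat the root znode. The connect string localhost:2181 and the 10 s waits are assumptions for illustration (only the server port 2181 and the 4000-40000 ms session-timeout bounds appear in the zookeeper log above); the Confluent image performs the real check with its own tooling, as the cp-base-new jars on the classpath in the next entries suggest.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkHealthCheck {
    public static void main(String[] args) throws Exception {
        String connect = "localhost:2181";   // assumed host; port taken from "binding to port 0.0.0.0/0.0.0.0:2181"
        int sessionTimeoutMs = 10000;        // within the 4000-40000 ms bounds the server logged

        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(connect, sessionTimeoutMs, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        try {
            if (!connected.await(10, TimeUnit.SECONDS)) {
                throw new IllegalStateException("ZooKeeper not reachable at " + connect);
            }
            // A successful exists() on the root znode is a minimal liveness probe.
            System.out.println("root znode stat: " + zk.exists("/", false));
        } finally {
            zk.close();
        }
    }
}

In a docker-compose network the connect string would normally use the zookeeper service name rather than localhost; that name is not visible in this part of the log.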
23:16:49 kafka | ===> Running preflight checks ... 23:16:49 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:49 kafka | ===> Check if Zookeeper is healthy ... 23:16:49 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:49 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:49 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:49 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 23:16:49 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:host.name=d3fc987b292f (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.331023371Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.331899762Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=878.921µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.337497879Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.338340678Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=843.518µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.341682063Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.342478211Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=796.458µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.345999682Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.353088922Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.08843ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.358419943Z level=info msg="Executing migration" id="create api_key table v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.359241151Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=820.478µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.366844353Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.367724663Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=880.24µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.372302477Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:49 grafana | logger=migrator 
t=2024-03-10T23:14:13.373133126Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=830.479µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.376545383Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.377450354Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=904.381µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.418534344Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.419228571Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=693.467µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.423411185Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.424404238Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=992.413µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.430973656Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.431002607Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=29.861µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.435378916Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.439190343Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.808767ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.444275708Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.446805266Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.529418ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.451821939Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.452056314Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=233.665µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.455594535Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.458170843Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.575968ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.462753326Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.465334485Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.580469ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.470647535Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.472102419Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.454374ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.477274495Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.478026123Z level=info 
msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=750.788µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.482303009Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.483721652Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.420213ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.488647494Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.489871941Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.227187ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.493301649Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.493886542Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=584.943µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.497151927Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.497804641Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=650.274µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.512127595Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.51233714Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=213.405µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.516370781Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.516462143Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=93.102µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.521865296Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.526318497Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.453612ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.531269109Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client 
environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8
.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr
/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,882] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,883] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,883] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,883] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,883] INFO Client environment:os.memory.total=504MB 
(org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,886] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:14,891] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:49 kafka | [2024-03-10 23:14:14,896] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:49 kafka | [2024-03-10 23:14:14,904] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:49 kafka | [2024-03-10 23:14:14,922] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 23:16:49 kafka | [2024-03-10 23:14:14,923] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:49 kafka | [2024-03-10 23:14:14,931] INFO Socket connection established, initiating session, client: /172.17.0.6:42392, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:49 kafka | [2024-03-10 23:14:14,970] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003a4e50000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:49 kafka | [2024-03-10 23:14:15,096] INFO Session: 0x1000003a4e50000 closed (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:15,096] INFO EventThread shut down for session: 0x1000003a4e50000 (org.apache.zookeeper.ClientCnxn) 23:16:49 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:49 kafka | ===> Launching ... 23:16:49 kafka | ===> Launching kafka ... 23:16:49 kafka | [2024-03-10 23:14:15,872] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:49 kafka | [2024-03-10 23:14:16,210] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:49 kafka | [2024-03-10 23:14:16,285] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:49 kafka | [2024-03-10 23:14:16,286] INFO starting (kafka.server.KafkaServer) 23:16:49 kafka | [2024-03-10 23:14:16,286] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:49 kafka | [2024-03-10 23:14:16,299] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 23:16:49 kafka | [2024-03-10 23:14:16,303] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:16,303] INFO Client environment:host.name=d3fc987b292f (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:16,303] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:16,303] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:49 kafka | [2024-03-10 23:14:16,303] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:49 mariadb | 2024-03-10 23:14:13+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
23:16:49 mariadb | 2024-03-10 23:14:13+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:49 mariadb | 2024-03-10 23:14:13+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:49 mariadb | 2024-03-10 23:14:13+00:00 [Note] [Entrypoint]: Initializing database files 23:16:49 mariadb | 2024-03-10 23:14:13 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:49 mariadb | 2024-03-10 23:14:13 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:49 mariadb | 2024-03-10 23:14:13 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:49 mariadb | 23:16:49 mariadb | 23:16:49 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:49 mariadb | To do so, start the server, then issue the following command: 23:16:49 mariadb | 23:16:49 mariadb | '/usr/bin/mysql_secure_installation' 23:16:49 mariadb | 23:16:49 mariadb | which will also give you the option of removing the test 23:16:49 mariadb | databases and anonymous user created by default. This is 23:16:49 mariadb | strongly recommended for production servers. 23:16:49 mariadb | 23:16:49 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:49 mariadb | 23:16:49 mariadb | Please report any problems at https://mariadb.org/jira 23:16:49 mariadb | 23:16:49 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:49 mariadb | 23:16:49 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:49 mariadb | https://mariadb.org/get-involved/ 23:16:49 mariadb | 23:16:49 mariadb | 2024-03-10 23:14:15+00:00 [Note] [Entrypoint]: Database files initialized 23:16:49 mariadb | 2024-03-10 23:14:15+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:49 mariadb | 2024-03-10 23:14:15+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:49 mariadb | 2024-03-10 23:14:15 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 
23:16:49 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:49 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: Number of transaction pools: 1 23:16:49 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:49 mariadb | 2024-03-10 23:14:15 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.533890899Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.6217ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.538160085Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.538251897Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=91.792µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.545771508Z level=info msg="Executing migration" id="create quota table v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.546565406Z level=info msg="Migration successfully executed" id="create quota table v1" duration=794.138µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.551520438Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.557505794Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=5.983985ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.566985198Z level=info msg="Executing migration" id="Update quota table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.56706391Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=80.072µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.576049834Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.577356693Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.306989ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.583893382Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.584776092Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=879.58µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.588576568Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.59310133Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.521452ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.600900417Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.60101016Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=111.853µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.607247801Z level=info msg="Executing migration" id="create session table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.608688643Z level=info msg="Migration successfully executed" id="create session table" duration=1.439942ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.614005873Z 
level=info msg="Executing migration" id="Drop old table playlist table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.614121826Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=114.283µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.61869251Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.618805402Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=112.532µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.623958169Z level=info msg="Executing migration" id="create playlist table v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.625096675Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.137986ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.632067254Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.633224999Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.157906ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.642028329Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.64210162Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=80.132µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.647784279Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.647806549Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=23.42µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.654935431Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.659054174Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.117503ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.665876859Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.670088054Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.213645ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.674552256Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.674677779Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=125.463µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.6800299Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.680144453Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=115.423µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.68486204Z level=info msg="Executing migration" id="create preferences table v3" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.685723669Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=861.379µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.688926832Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.688957692Z 
level=info msg="Migration successfully executed" id="Update preferences table charset" duration=29.151µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.692454711Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.695615683Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.160572ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.69990733Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.700235188Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=332.948µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.703522292Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.706837387Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.314835ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.712220399Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.715212986Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.989267ms 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.719000653Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.719096745Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=100.822µs 23:16:49 grafana | logger=migrator t=2024-03-10T23:14:13.723077725Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:50 kafka | [2024-03-10 23:14:16,303] INFO Client 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:
/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:java.io.tmpdir=/tmp 
(org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,304] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,306] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5b619d14 (org.apache.zookeeper.ZooKeeper) 23:16:50 kafka | [2024-03-10 23:14:16,310] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:50 kafka | [2024-03-10 23:14:16,316] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: 128 rollback segments are active. 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:50 mariadb | 2024-03-10 23:14:15 0 [Note] mariadbd: ready for connections. 
23:16:50 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:50 mariadb | 2024-03-10 23:14:16+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:50 mariadb | 2024-03-10 23:14:18+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:50 mariadb | 2024-03-10 23:14:18+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:50 mariadb | 23:16:50 mariadb | 2024-03-10 23:14:18+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:50 mariadb | 23:16:50 mariadb | 2024-03-10 23:14:18+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:50 mariadb | #!/bin/bash -xv 23:16:50 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:50 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:50 mariadb | # 23:16:50 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:50 mariadb | # you may not use this file except in compliance with the License. 23:16:50 mariadb | # You may obtain a copy of the License at 23:16:50 mariadb | # 23:16:50 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:50 mariadb | # 23:16:50 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:50 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:50 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:50 mariadb | # See the License for the specific language governing permissions and 23:16:50 mariadb | # limitations under the License. 23:16:50 mariadb | 23:16:50 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:50 mariadb | do 23:16:50 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:50 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:50 mariadb | done 23:16:50 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:50 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:50 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:50 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:50 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:50 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:50 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:50 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:50 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:50 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:50 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:50 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:50 mariadb | + for db in migration pooling policyadmin 
operationshistory clampacm policyclamp 23:16:50 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:50 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:50 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:50 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:50 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:50 mariadb | 23:16:50 kafka | [2024-03-10 23:14:16,318] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:50 kafka | [2024-03-10 23:14:16,324] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 23:16:50 kafka | [2024-03-10 23:14:16,332] INFO Socket connection established, initiating session, client: /172.17.0.6:45170, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:50 kafka | [2024-03-10 23:14:16,342] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003a4e50001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:50 kafka | [2024-03-10 23:14:16,348] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:16:50 kafka | [2024-03-10 23:14:16,727] INFO Cluster ID = dVKmUcACQYWhG0JC5XUpMQ (kafka.server.KafkaServer) 23:16:50 kafka | [2024-03-10 23:14:16,730] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:50 kafka | [2024-03-10 23:14:16,786] INFO KafkaConfig values: 23:16:50 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:50 kafka | alter.config.policy.class.name = null 23:16:50 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:50 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:50 kafka | authorizer.class.name = 23:16:50 kafka | auto.create.topics.enable = true 23:16:50 kafka | auto.include.jmx.reporter = true 23:16:50 kafka | auto.leader.rebalance.enable = true 23:16:50 kafka | background.threads = 10 23:16:50 kafka | broker.heartbeat.interval.ms = 2000 23:16:50 kafka | broker.id = 1 23:16:50 kafka | broker.id.generation.enable = true 23:16:50 kafka | broker.rack = null 23:16:50 kafka | broker.session.timeout.ms = 9000 23:16:50 kafka | client.quota.callback.class = null 23:16:50 kafka | compression.type = producer 23:16:50 kafka | connection.failed.authentication.delay.ms = 100 23:16:50 kafka | connections.max.idle.ms = 600000 23:16:50 kafka | connections.max.reauth.ms = 0 23:16:50 kafka | control.plane.listener.name = null 23:16:50 kafka | controlled.shutdown.enable = true 23:16:50 kafka | controlled.shutdown.max.retries = 3 23:16:50 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:50 kafka | controller.listener.names = null 23:16:50 kafka | controller.quorum.append.linger.ms = 25 23:16:50 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:50 kafka | controller.quorum.election.timeout.ms = 1000 23:16:50 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:50 kafka | controller.quorum.request.timeout.ms = 2000 23:16:50 kafka | controller.quorum.retry.backoff.ms = 20 23:16:50 kafka | controller.quorum.voters = [] 23:16:50 kafka | controller.quota.window.num = 11 23:16:50 kafka | controller.quota.window.size.seconds = 1 23:16:50 
kafka | controller.socket.timeout.ms = 30000 23:16:50 kafka | create.topic.policy.class.name = null 23:16:50 kafka | default.replication.factor = 1 23:16:50 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:50 kafka | delegation.token.expiry.time.ms = 86400000 23:16:50 kafka | delegation.token.master.key = null 23:16:50 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:50 kafka | delegation.token.secret.key = null 23:16:50 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:50 kafka | delete.topic.enable = true 23:16:50 kafka | early.start.listeners = null 23:16:50 kafka | fetch.max.bytes = 57671680 23:16:50 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:50 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:50 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:50 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:50 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:50 kafka | group.consumer.max.size = 2147483647 23:16:50 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:50 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:50 kafka | group.consumer.session.timeout.ms = 45000 23:16:50 kafka | group.coordinator.new.enable = false 23:16:50 kafka | group.coordinator.threads = 1 23:16:50 kafka | group.initial.rebalance.delay.ms = 3000 23:16:50 kafka | group.max.session.timeout.ms = 1800000 23:16:50 kafka | group.max.size = 2147483647 23:16:50 kafka | group.min.session.timeout.ms = 6000 23:16:50 kafka | initial.broker.registration.timeout.ms = 60000 23:16:50 kafka | inter.broker.listener.name = PLAINTEXT 23:16:50 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:50 kafka | kafka.metrics.polling.interval.secs = 10 23:16:50 kafka | kafka.metrics.reporters = [] 23:16:50 kafka | leader.imbalance.check.interval.seconds = 300 23:16:50 kafka | leader.imbalance.per.broker.percentage = 10 23:16:50 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:50 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:50 kafka | log.cleaner.backoff.ms = 15000 23:16:50 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:50 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:50 kafka | log.cleaner.enable = true 23:16:50 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:50 kafka | log.cleaner.io.buffer.size = 524288 23:16:50 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:50 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:50 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:50 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:50 kafka | log.cleaner.threads = 1 23:16:50 kafka | log.cleanup.policy = [delete] 23:16:50 kafka | log.dir = /tmp/kafka-logs 23:16:50 kafka | log.dirs = /var/lib/kafka/data 23:16:50 kafka | log.flush.interval.messages = 9223372036854775807 23:16:50 kafka | log.flush.interval.ms = null 23:16:50 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:50 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:50 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:50 kafka | log.index.interval.bytes = 4096 23:16:50 kafka | log.index.size.max.bytes = 10485760 23:16:50 kafka | log.local.retention.bytes = -2 23:16:50 kafka | log.local.retention.ms = -2 23:16:50 kafka | log.message.downconversion.enable = true 23:16:50 kafka | log.message.format.version = 3.0-IV1 23:16:50 kafka | 
log.message.timestamp.after.max.ms = 9223372036854775807 23:16:50 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:50 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:50 kafka | log.message.timestamp.type = CreateTime 23:16:50 kafka | log.preallocate = false 23:16:50 kafka | log.retention.bytes = -1 23:16:50 kafka | log.retention.check.interval.ms = 300000 23:16:50 kafka | log.retention.hours = 168 23:16:50 kafka | log.retention.minutes = null 23:16:50 kafka | log.retention.ms = null 23:16:50 kafka | log.roll.hours = 168 23:16:50 kafka | log.roll.jitter.hours = 0 23:16:50 kafka | log.roll.jitter.ms = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.724282633Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.204468ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.730975654Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.732708744Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.73782ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.776953516Z level=info msg="Executing migration" id="create alert table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.778764857Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.807731ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.782720466Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.783585497Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=864.221µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.789534381Z level=info msg="Executing migration" id="add index alert state" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.79081718Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.282029ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.79431576Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.795577888Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.261958ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.79922204Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.799893196Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=670.876µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.805945283Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.807264832Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.319069ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.811038339Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.812401429Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.362301ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.815917118Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:50 grafana | 
logger=migrator t=2024-03-10T23:14:13.830310545Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.382706ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.833912997Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.834630053Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=717.246µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.84069827Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.841563809Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=865.499µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.846499612Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.846782618Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=282.637µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.850170225Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.850675397Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=505.252µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.856596731Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.857382648Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=785.147µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.862146406Z level=info msg="Executing migration" id="Add column is_default" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.865128624Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.987288ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.876168064Z level=info msg="Executing migration" id="Add column frequency" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.880525082Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.361198ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.884401761Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.887983632Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.578931ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.893675751Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.897541618Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.865707ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.901025217Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.901970229Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=944.831µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.910567693Z level=info msg="Executing migration" id="Update alert 
table charset" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.910611794Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=49.621µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.915027495Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.915048925Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=22.01µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.921143463Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.921727876Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=584.063µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.925046941Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.926794401Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.74942ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.930920704Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.932116972Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.195078ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.939281224Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.940136464Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=855.78µs 23:16:50 kafka | log.roll.ms = null 23:16:50 kafka | log.segment.bytes = 1073741824 23:16:50 kafka | log.segment.delete.delay.ms = 60000 23:16:50 kafka | max.connection.creation.rate = 2147483647 23:16:50 kafka | max.connections = 2147483647 23:16:50 kafka | max.connections.per.ip = 2147483647 23:16:50 kafka | max.connections.per.ip.overrides = 23:16:50 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:50 kafka | message.max.bytes = 1048588 23:16:50 kafka | metadata.log.dir = null 23:16:50 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:50 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:50 kafka | metadata.log.segment.bytes = 1073741824 23:16:50 kafka | metadata.log.segment.min.bytes = 8388608 23:16:50 kafka | metadata.log.segment.ms = 604800000 23:16:50 kafka | metadata.max.idle.interval.ms = 500 23:16:50 kafka | metadata.max.retention.bytes = 104857600 23:16:50 kafka | metadata.max.retention.ms = 604800000 23:16:50 kafka | metric.reporters = [] 23:16:50 kafka | metrics.num.samples = 2 23:16:50 kafka | metrics.recording.level = INFO 23:16:50 kafka | metrics.sample.window.ms = 30000 23:16:50 kafka | min.insync.replicas = 1 23:16:50 kafka | node.id = 1 23:16:50 kafka | num.io.threads = 8 23:16:50 kafka | num.network.threads = 3 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.949484145Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.951072382Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.587746ms 23:16:50 
grafana | logger=migrator t=2024-03-10T23:14:13.955672445Z level=info msg="Executing migration" id="Add for to alert table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.961539358Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.865773ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.967782531Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.972446676Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.679565ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.978126024Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.978448891Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=323.057µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.985543912Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.986478923Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=935.111µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.991863735Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.993350299Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.485864ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:13.99912269Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.005550476Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.380174ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.010635351Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.010725213Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=90.412µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.017591469Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.019037562Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.446153ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.024335833Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.026195644Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.856851ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.03081532Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.0312761Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=462.74µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.036905278Z level=info msg="Executing migration" id="create annotation table v5" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.038366002Z level=info msg="Migration successfully executed" 
id="create annotation table v5" duration=1.459844ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.042596437Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.044046361Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.449184ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.049930765Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.051429808Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.491173ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.056878372Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.057815224Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=936.702µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.062639093Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.063682458Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.042964ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.067017003Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.068038246Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.015363ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.076379916Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.076424607Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=46.371µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.083809525Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.09017728Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.369375ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.095592013Z level=info msg="Executing migration" id="Drop category_id index" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.096484903Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=892.48µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.10290846Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.109769946Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.860306ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.114878692Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.115615828Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=735.446µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.161343229Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.162791832Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.448133ms 23:16:50 grafana | logger=migrator 
t=2024-03-10T23:14:14.168760308Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.170793684Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=2.033757ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.176551855Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.188983128Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.416592ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.192714863Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.193289006Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=574.093µs 23:16:50 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:50 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:50 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:50 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:50 mariadb | 23:16:50 mariadb | 2024-03-10 23:14:19+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:50 mariadb | 2024-03-10 23:14:19 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:50 mariadb | 2024-03-10 23:14:19 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:50 mariadb | 2024-03-10 23:14:19 0 [Note] InnoDB: Starting shutdown... 23:16:50 mariadb | 2024-03-10 23:14:19 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:50 mariadb | 2024-03-10 23:14:19 0 [Note] InnoDB: Buffer pool(s) dump completed at 240310 23:14:19 23:16:50 mariadb | 2024-03-10 23:14:19 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:50 mariadb | 2024-03-10 23:14:19 0 [Note] InnoDB: Shutdown completed; log sequence number 328510; transaction id 298 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] mariadbd: Shutdown complete 23:16:50 mariadb | 23:16:50 mariadb | 2024-03-10 23:14:20+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:50 mariadb | 23:16:50 mariadb | 2024-03-10 23:14:20+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:50 mariadb | 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
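The mariadb entrypoint lines above show the initialisation step: privileges are flushed as root, then the policy-clamp schema is piped into the policyclamp database as policy_user, all against the temporary server that is shut down immediately afterwards. A minimal Python sketch of those same two commands, assuming the mysql client and the SQL file path printed above are reachable from wherever it runs:

    import subprocess

    # Flush privileges as root, exactly as the entrypoint does (credentials taken from the log above).
    subprocess.run(["mysql", "-uroot", "-psecret", "--execute", "FLUSH PRIVILEGES;"], check=True)

    # Load the policy-clamp schema into the policyclamp database as policy_user.
    with open("/tmp/policy-clamp-create-tables.sql", "rb") as schema:
        subprocess.run(["mysql", "-upolicy_user", "-ppolicy_user", "-f", "policyclamp"],
                       stdin=schema, check=True)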
23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Number of transaction pools: 1 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: 128 rollback segments are active. 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: log sequence number 328510; transaction id 299 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] Server socket created on IP: '::'. 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] mariadbd: ready for connections. 
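Once the permanent mariadbd process reports "ready for connections", dependent containers can connect on port 3306. A small stdlib-only sketch of a readiness poll against that port; the host name and timeout here are illustrative assumptions, not values taken from the job:

    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 120.0) -> None:
        # Poll until the TCP port accepts connections or the timeout expires.
        deadline = time.monotonic() + timeout
        while True:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return
            except OSError:
                if time.monotonic() > deadline:
                    raise TimeoutError(f"{host}:{port} did not become reachable")
                time.sleep(1)

    wait_for_port("mariadb", 3306)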
23:16:50 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.19919092Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.200690505Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.499405ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.206514417Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.207028048Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=514.141µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.209859273Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.210404805Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=545.372µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.216844041Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.217039167Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=195.435µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.222277535Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.226919081Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.640856ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.231926275Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.236467417Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.540342ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.243712923Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.244819568Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.100464ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.248153624Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.24929641Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.141956ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.253198098Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.253481796Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=284.368µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.259006381Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.266846699Z level=info msg="Migration successfully executed" id="Add epoch_end column" 
duration=7.839659ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.270635356Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.27129244Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=656.434µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.275553087Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.27569207Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=137.963µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.280851327Z level=info msg="Executing migration" id="Move region to single row" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.281257266Z level=info msg="Migration successfully executed" id="Move region to single row" duration=371.378µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.286738561Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.288617144Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.875523ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.294295323Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.295148413Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=852.82µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.303920663Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.304968086Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.046833ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.310575084Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.311613717Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.038563ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.316263723Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.317163693Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=900.23µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.323832236Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.324833838Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.000972ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.32972497Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.329931794Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=206.655µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.334197431Z level=info msg="Executing 
migration" id="create test_data table" 23:16:50 kafka | num.partitions = 1 23:16:50 kafka | num.recovery.threads.per.data.dir = 1 23:16:50 kafka | num.replica.alter.log.dirs.threads = null 23:16:50 kafka | num.replica.fetchers = 1 23:16:50 kafka | offset.metadata.max.bytes = 4096 23:16:50 kafka | offsets.commit.required.acks = -1 23:16:50 kafka | offsets.commit.timeout.ms = 5000 23:16:50 kafka | offsets.load.buffer.size = 5242880 23:16:50 kafka | offsets.retention.check.interval.ms = 600000 23:16:50 kafka | offsets.retention.minutes = 10080 23:16:50 kafka | offsets.topic.compression.codec = 0 23:16:50 kafka | offsets.topic.num.partitions = 50 23:16:50 kafka | offsets.topic.replication.factor = 1 23:16:50 kafka | offsets.topic.segment.bytes = 104857600 23:16:50 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:50 kafka | password.encoder.iterations = 4096 23:16:50 kafka | password.encoder.key.length = 128 23:16:50 kafka | password.encoder.keyfactory.algorithm = null 23:16:50 kafka | password.encoder.old.secret = null 23:16:50 kafka | password.encoder.secret = null 23:16:50 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:50 kafka | process.roles = [] 23:16:50 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:50 kafka | producer.id.expiration.ms = 86400000 23:16:50 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:50 kafka | queued.max.request.bytes = -1 23:16:50 kafka | queued.max.requests = 500 23:16:50 kafka | quota.window.num = 11 23:16:50 kafka | quota.window.size.seconds = 1 23:16:50 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:50 kafka | remote.log.manager.task.interval.ms = 30000 23:16:50 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:50 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:50 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:50 kafka | remote.log.manager.thread.pool.size = 10 23:16:50 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:50 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:50 kafka | remote.log.metadata.manager.class.path = null 23:16:50 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:50 kafka | remote.log.metadata.manager.listener.name = null 23:16:50 kafka | remote.log.reader.max.pending.tasks = 100 23:16:50 kafka | remote.log.reader.threads = 10 23:16:50 kafka | remote.log.storage.manager.class.name = null 23:16:50 kafka | remote.log.storage.manager.class.path = null 23:16:50 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
23:16:50 kafka | remote.log.storage.system.enable = false 23:16:50 kafka | replica.fetch.backoff.ms = 1000 23:16:50 kafka | replica.fetch.max.bytes = 1048576 23:16:50 kafka | replica.fetch.min.bytes = 1 23:16:50 kafka | replica.fetch.response.max.bytes = 10485760 23:16:50 kafka | replica.fetch.wait.max.ms = 500 23:16:50 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:50 kafka | replica.lag.time.max.ms = 30000 23:16:50 kafka | replica.selector.class = null 23:16:50 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:50 kafka | replica.socket.timeout.ms = 30000 23:16:50 kafka | replication.quota.window.num = 11 23:16:50 kafka | replication.quota.window.size.seconds = 1 23:16:50 kafka | request.timeout.ms = 30000 23:16:50 kafka | reserved.broker.max.id = 1000 23:16:50 kafka | sasl.client.callback.handler.class = null 23:16:50 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:50 kafka | sasl.jaas.config = null 23:16:50 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:50 kafka | sasl.kerberos.service.name = null 23:16:50 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 kafka | sasl.login.callback.handler.class = null 23:16:50 kafka | sasl.login.class = null 23:16:50 kafka | sasl.login.connect.timeout.ms = null 23:16:50 kafka | sasl.login.read.timeout.ms = null 23:16:50 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:50 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:50 kafka | sasl.login.refresh.window.factor = 0.8 23:16:50 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:50 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:50 kafka | sasl.login.retry.backoff.ms = 100 23:16:50 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:50 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:50 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 kafka | sasl.oauthbearer.expected.audience = null 23:16:50 kafka | sasl.oauthbearer.expected.issuer = null 23:16:50 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:50 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:50 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:50 kafka | sasl.server.callback.handler.class = null 23:16:50 kafka | sasl.server.max.receive.size = 524288 23:16:50 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:50 kafka | security.providers = null 23:16:50 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:50 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:50 kafka | socket.connection.setup.timeout.ms = 10000 23:16:50 kafka | socket.listen.backlog.size = 50 23:16:50 kafka | socket.receive.buffer.bytes = 102400 23:16:50 kafka | socket.request.max.bytes = 104857600 23:16:50 kafka | socket.send.buffer.bytes = 102400 23:16:50 kafka | ssl.cipher.suites = [] 23:16:50 kafka | ssl.client.auth = none 23:16:50 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 kafka | ssl.endpoint.identification.algorithm = https 23:16:50 kafka | ssl.engine.factory.class = null 23:16:50 kafka | ssl.key.password = null 23:16:50 kafka | ssl.keymanager.algorithm = 
SunX509 23:16:50 kafka | ssl.keystore.certificate.chain = null 23:16:50 kafka | ssl.keystore.key = null 23:16:50 kafka | ssl.keystore.location = null 23:16:50 kafka | ssl.keystore.password = null 23:16:50 kafka | ssl.keystore.type = JKS 23:16:50 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:50 kafka | ssl.protocol = TLSv1.3 23:16:50 kafka | ssl.provider = null 23:16:50 kafka | ssl.secure.random.implementation = null 23:16:50 kafka | ssl.trustmanager.algorithm = PKIX 23:16:50 kafka | ssl.truststore.certificates = null 23:16:50 kafka | ssl.truststore.location = null 23:16:50 kafka | ssl.truststore.password = null 23:16:50 kafka | ssl.truststore.type = JKS 23:16:50 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:50 kafka | transaction.max.timeout.ms = 900000 23:16:50 kafka | transaction.partition.verification.enable = true 23:16:50 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:50 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:50 kafka | transaction.state.log.min.isr = 2 23:16:50 kafka | transaction.state.log.num.partitions = 50 23:16:50 kafka | transaction.state.log.replication.factor = 3 23:16:50 kafka | transaction.state.log.segment.bytes = 104857600 23:16:50 kafka | transactional.id.expiration.ms = 604800000 23:16:50 kafka | unclean.leader.election.enable = false 23:16:50 kafka | unstable.api.versions.enable = false 23:16:50 kafka | zookeeper.clientCnxnSocket = null 23:16:50 kafka | zookeeper.connect = zookeeper:2181 23:16:50 kafka | zookeeper.connection.timeout.ms = null 23:16:50 kafka | zookeeper.max.in.flight.requests = 10 23:16:50 kafka | zookeeper.metadata.migration.enable = false 23:16:50 kafka | zookeeper.session.timeout.ms = 18000 23:16:50 kafka | zookeeper.set.acl = false 23:16:50 kafka | zookeeper.ssl.cipher.suites = null 23:16:50 kafka | zookeeper.ssl.client.enable = false 23:16:50 kafka | zookeeper.ssl.crl.enable = false 23:16:50 kafka | zookeeper.ssl.enabled.protocols = null 23:16:50 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:50 kafka | zookeeper.ssl.keystore.location = null 23:16:50 kafka | zookeeper.ssl.keystore.password = null 23:16:50 kafka | zookeeper.ssl.keystore.type = null 23:16:50 kafka | zookeeper.ssl.ocsp.enable = false 23:16:50 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:50 kafka | zookeeper.ssl.truststore.location = null 23:16:50 kafka | zookeeper.ssl.truststore.password = null 23:16:50 kafka | zookeeper.ssl.truststore.type = null 23:16:50 kafka | (kafka.server.KafkaConfig) 23:16:50 kafka | [2024-03-10 23:14:16,824] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:50 kafka | [2024-03-10 23:14:16,825] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:50 kafka | [2024-03-10 23:14:16,827] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:50 kafka | [2024-03-10 23:14:16,831] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:50 kafka | [2024-03-10 23:14:16,869] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:50 kafka | [2024-03-10 23:14:16,877] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:50 kafka | [2024-03-10 23:14:16,887] INFO Loaded 0 logs in 17ms (kafka.log.LogManager) 23:16:50 kafka | [2024-03-10 23:14:16,889] 
INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:50 kafka | [2024-03-10 23:14:16,890] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:16:50 kafka | [2024-03-10 23:14:16,902] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:50 kafka | [2024-03-10 23:14:16,995] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:50 kafka | [2024-03-10 23:14:17,014] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:50 kafka | [2024-03-10 23:14:17,035] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:50 kafka | [2024-03-10 23:14:17,067] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:50 kafka | [2024-03-10 23:14:17,442] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:50 kafka | [2024-03-10 23:14:17,465] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:50 kafka | [2024-03-10 23:14:17,466] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:50 kafka | [2024-03-10 23:14:17,472] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:50 kafka | [2024-03-10 23:14:17,476] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:50 kafka | [2024-03-10 23:14:17,501] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,503] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,506] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 mariadb | 2024-03-10 23:14:20 0 [Note] InnoDB: Buffer pool(s) load completed at 240310 23:14:20 23:16:50 mariadb | 2024-03-10 23:14:20 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:50 mariadb | 2024-03-10 23:14:20 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:50 mariadb | 2024-03-10 23:14:20 10 [Warning] Aborted connection 10 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:50 mariadb | 2024-03-10 23:14:20 15 [Warning] Aborted connection 15 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.335490971Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.29347ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.344875874Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.346110293Z level=info msg="Migration 
successfully executed" id="create dashboard_version table v1" duration=1.231169ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.355049696Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.356090599Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.040453ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.360481249Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.361570004Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.088345ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.36801975Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.368253686Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=234.115µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.373322121Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.374021108Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=698.206µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.379045461Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.379218375Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=175.184µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.383070563Z level=info msg="Executing migration" id="create team table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.383866711Z level=info msg="Migration successfully executed" id="create team table" duration=790.948µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.389147341Z level=info msg="Executing migration" id="add index team.org_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.390210595Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.062604ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.394900622Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.395830443Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=929.211µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.401554883Z level=info msg="Executing migration" id="Add column uid in team" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.405311179Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.762976ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.408765907Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.408911391Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=146.364µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.413283271Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:50 grafana | logger=migrator 
t=2024-03-10T23:14:14.413972196Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=688.835µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.422564131Z level=info msg="Executing migration" id="create team member table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.423169515Z level=info msg="Migration successfully executed" id="create team member table" duration=605.024µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.428857444Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.430738377Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.887563ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.436302304Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.437239505Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=936.731µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.441741768Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.442819122Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.082324ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.447001498Z level=info msg="Executing migration" id="Add column email to team table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.454895986Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.890279ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.460479973Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.467394971Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.920478ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.474311638Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.478950844Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.638616ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.545044028Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.547601516Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=2.560018ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.555539766Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.556671842Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.131886ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.562585037Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.564696055Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=2.109748ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.571513689Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 
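Each grafana migrator record pairs an "Executing migration" line with a "Migration successfully executed" line carrying the elapsed duration. A minimal sketch, assuming each record sits on one console line, for pulling out the slowest migrations from this output:

    import re

    MIGRATION = re.compile(
        r'msg="Migration successfully executed" id="(?P<id>[^"]+)" duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)'
    )
    UNIT_TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

    def slowest_migrations(console_text: str, top: int = 5):
        # Normalise every duration to milliseconds and return the largest ones.
        timings = [
            (m.group("id"), float(m.group("value")) * UNIT_TO_MS[m.group("unit")])
            for m in MIGRATION.finditer(console_text)
        ]
        return sorted(timings, key=lambda item: item[1], reverse=True)[:top]

Run against the records in this section, it would surface entries such as "Rename table annotation_tag to annotation_tag_v2 - v2" at roughly 12.4 ms.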
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.572543713Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.029223ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.577972016Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.579522132Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.546106ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.595045154Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.5966097Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.564086ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.604600982Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.60629295Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.691768ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.612464762Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.613559926Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.092614ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.619669755Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.620169666Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=499.351µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.625536018Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.625922837Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=382.339µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.634342519Z level=info msg="Executing migration" id="create tag table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.635421774Z level=info msg="Migration successfully executed" id="create tag table" duration=1.078676ms 23:16:50 kafka | [2024-03-10 23:14:17,507] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,510] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,521] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:50 kafka | [2024-03-10 23:14:17,522] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:50 kafka | [2024-03-10 23:14:17,551] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:16:50 kafka | [2024-03-10 23:14:17,583] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1710112457569,1710112457569,1,0,0,72057609689300993,258,0,27 23:16:50 kafka | (kafka.zk.KafkaZkClient) 23:16:50 kafka | [2024-03-10 23:14:17,584] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:50 kafka | [2024-03-10 23:14:17,659] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:50 kafka | [2024-03-10 23:14:17,671] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,675] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,680] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,692] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:17,699] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:17,717] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:50 kafka | [2024-03-10 23:14:17,719] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:50 kafka | [2024-03-10 23:14:17,728] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,732] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:50 kafka | [2024-03-10 23:14:17,732] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:50 kafka | [2024-03-10 23:14:17,734] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,740] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:50 kafka | [2024-03-10 23:14:17,771] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,772] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 23:16:50 kafka | [2024-03-10 23:14:17,777] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,780] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,783] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,799] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,804] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,815] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:50 kafka | [2024-03-10 23:14:17,816] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:50 kafka | [2024-03-10 23:14:17,830] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,830] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,831] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,831] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,832] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:50 kafka | [2024-03-10 23:14:17,835] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,835] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,836] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,837] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:50 kafka | [2024-03-10 23:14:17,838] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,842] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:17,848] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:50 kafka | [2024-03-10 23:14:17,855] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:50 kafka | [2024-03-10 23:14:17,861] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 23:16:50 kafka | [2024-03-10 23:14:17,864] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) 23:16:50 kafka | [2024-03-10 23:14:17,865] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:50 kafka | [2024-03-10 23:14:17,866] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:50 kafka | [2024-03-10 23:14:17,866] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 23:16:50 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 23:16:50 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 23:16:50 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 23:16:50 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 23:16:50 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 23:16:50 kafka | [2024-03-10 23:14:17,871] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 23:16:50 kafka | [2024-03-10 23:14:17,866] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:50 kafka | [2024-03-10 23:14:17,874] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:50 kafka | [2024-03-10 23:14:17,875] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:50 kafka | [2024-03-10 23:14:17,879] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:50 kafka | [2024-03-10 23:14:17,879] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,886] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,886] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,887] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,887] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,888] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,889] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:50 kafka | [2024-03-10 23:14:17,896] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:50 kafka | [2024-03-10 23:14:17,899] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) 23:16:50 kafka | [2024-03-10 23:14:17,901] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:17,910] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:50 kafka | [2024-03-10 23:14:17,911] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:16:50 kafka | [2024-03-10 23:14:17,911] INFO Kafka startTimeMs: 1710112457904 (org.apache.kafka.common.utils.AppInfoParser) 23:16:50 kafka | [2024-03-10 23:14:17,913] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:50 kafka | [2024-03-10 23:14:17,976] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:50 kafka | [2024-03-10 23:14:18,092] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:18,135] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:50 kafka | [2024-03-10 23:14:18,135] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:50 kafka | [2024-03-10 23:14:22,903] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:22,904] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:53,434] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:50 kafka | [2024-03-10 23:14:53,435] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:50 kafka | [2024-03-10 23:14:53,484] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating 
the first block (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:53,491] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:53,520] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(0AZYR26wR5iORv2DNmvoOw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:53,521] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:53,526] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,527] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,534] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,534] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,568] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,572] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,575] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,575] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,575] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,582] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,584] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
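The controller records above correspond to creating the policy-pdp-pap topic with a single partition and replication factor 1 on broker 1. A minimal sketch of an equivalent request, assuming the kafka-python package is available and using the PLAINTEXT_HOST listener (localhost:29092) that the broker registered earlier; this is not how the CSIT suite itself creates the topic:

    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="localhost:29092")
    # Mirrors the assignment shown above: one partition, one replica on broker 1.
    admin.create_topics([NewTopic(name="policy-pdp-pap", num_partitions=1, replication_factor=1)])
    admin.close()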
23:16:50 kafka | [2024-03-10 23:14:53,588] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(N4B-uQPVQqWvv830YyfZ-Q),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:53,589] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:50 kafka | [2024-03-10 23:14:53,589] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,589] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,589] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,589] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,589] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,589] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,591] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,592] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.64363855Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.645298208Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.665728ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.658609451Z level=info msg="Executing migration" id="create login attempt table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.659216085Z level=info msg="Migration successfully executed" id="create login attempt table" duration=604.373µs 23:16:50 grafana | 
logger=migrator t=2024-03-10T23:14:14.668848804Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.670044261Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.201777ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.677757457Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.678643966Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=886.669µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.688257736Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.708355013Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=20.074826ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.714718747Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.715485094Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=763.527µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.721080732Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.722874383Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.800112ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.729057953Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.729419151Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=358.198µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.734298233Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.734966098Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=668.895µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.74207453Z level=info msg="Executing migration" id="create user auth table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.743225155Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.151446ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.750960912Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.752921487Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.965826ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.75921461Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.759313142Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=98.742µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.763854065Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.76758348Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" 
duration=3.730375ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.774891207Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.780665808Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.768452ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.786711195Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.791987395Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.27313ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.797794067Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.806493275Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.705468ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.810424084Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.811596881Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.173317ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.821101667Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.826645883Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.544526ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.830539503Z level=info msg="Executing migration" id="create server_lock table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.831163917Z level=info msg="Migration successfully executed" id="create server_lock table" duration=624.444µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.837297656Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.838279638Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=982.242µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.847819385Z level=info msg="Executing migration" id="create user auth token table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.848763567Z level=info msg="Migration successfully executed" id="create user auth token table" duration=943.082µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.852117363Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.853482704Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.364231ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.858412936Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.85947009Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.048154ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.866582971Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.867640816Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" 
duration=1.057695ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.872100087Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.877851408Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.748661ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.882411812Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.88365957Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.246998ms 23:16:50 policy-api | Waiting for mariadb port 3306... 23:16:50 policy-api | mariadb (172.17.0.5:3306) open 23:16:50 policy-api | Waiting for policy-db-migrator port 6824... 23:16:50 policy-api | policy-db-migrator (172.17.0.8:6824) open 23:16:50 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:50 policy-api | 23:16:50 policy-api | . ____ _ __ _ _ 23:16:50 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:50 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:50 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:50 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:50 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:50 policy-api | :: Spring Boot :: (v3.1.8) 23:16:50 policy-api | 23:16:50 policy-api | [2024-03-10T23:14:28.943+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:50 policy-api | [2024-03-10T23:14:28.945+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:50 policy-api | [2024-03-10T23:14:30.767+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:50 policy-api | [2024-03-10T23:14:30.875+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 6 JPA repository interfaces. 23:16:50 policy-api | [2024-03-10T23:14:31.351+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:50 policy-api | [2024-03-10T23:14:31.352+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:50 policy-api | [2024-03-10T23:14:32.096+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:50 policy-api | [2024-03-10T23:14:32.108+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:50 policy-api | [2024-03-10T23:14:32.109+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:50 policy-api | [2024-03-10T23:14:32.110+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:50 policy-api | [2024-03-10T23:14:32.205+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:50 policy-api | [2024-03-10T23:14:32.205+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3194 ms 23:16:50 policy-api | [2024-03-10T23:14:32.673+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:50 policy-api | [2024-03-10T23:14:32.753+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:50 policy-api | [2024-03-10T23:14:32.758+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:50 policy-api | [2024-03-10T23:14:32.807+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:50 policy-api | [2024-03-10T23:14:33.164+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:50 policy-api | [2024-03-10T23:14:33.187+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:50 policy-api | [2024-03-10T23:14:33.322+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 23:16:50 policy-api | [2024-03-10T23:14:33.324+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:50 policy-api | [2024-03-10T23:14:35.306+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:50 policy-api | [2024-03-10T23:14:35.310+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:50 policy-api | [2024-03-10T23:14:36.371+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:50 policy-api | [2024-03-10T23:14:37.336+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:50 policy-api | [2024-03-10T23:14:38.536+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:50 policy-api | [2024-03-10T23:14:38.736+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@63f95ac1, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@14a7e16e, org.springframework.security.web.context.SecurityContextHolderFilter@6e04275e, org.springframework.security.web.header.HeaderWriterFilter@2986e26f, org.springframework.security.web.authentication.logout.LogoutFilter@c9f1951, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@13e6b26c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@453ef145, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3edd135d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@59171a5d, org.springframework.security.web.access.ExceptionTranslationFilter@19a7e618, org.springframework.security.web.access.intercept.AuthorizationFilter@d181ca3] 23:16:50 policy-api | [2024-03-10T23:14:39.643+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:50 policy-api | [2024-03-10T23:14:39.738+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:50 policy-api | [2024-03-10T23:14:39.781+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:50 policy-api | [2024-03-10T23:14:39.797+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.595 seconds (process running for 12.255) 23:16:50 policy-api | [2024-03-10T23:14:39.965+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:50 policy-api | [2024-03-10T23:14:39.965+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet' 23:16:50 policy-api | [2024-03-10T23:14:39.967+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 1 ms 23:16:50 policy-api | [2024-03-10T23:14:56.547+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 23:16:50 policy-api | [] 23:16:50 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:50 policy-apex-pdp | mariadb (172.17.0.5:3306) open 23:16:50 policy-apex-pdp | Waiting for kafka port 9092... 23:16:50 policy-apex-pdp | kafka (172.17.0.6:9092) open 23:16:50 policy-apex-pdp | Waiting for pap port 6969... 
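The policy-apex-pdp container above blocks until its dependencies (mariadb:3306, kafka:9092, pap:6969) accept TCP connections before it launches the JVM. The snippet below is a minimal sketch of that kind of readiness check, not the actual entrypoint shipped in the image; the host names and ports are taken from the log, while the class name WaitForPort and the timeout values are made up for illustration.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative sketch: retry a TCP connect until a dependency is reachable,
// mirroring the "Waiting for ... port ..." / "... open" messages in the log.
public class WaitForPort {
    public static void waitFor(String host, int port, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2000);
                System.out.printf("%s:%d open%n", host, port);
                return;
            } catch (IOException e) {
                System.out.printf("Waiting for %s port %d...%n", host, port);
                Thread.sleep(1000); // dependency not ready yet, retry
            }
        }
        throw new IllegalStateException(host + ":" + port + " not reachable in time");
    }

    public static void main(String[] args) throws InterruptedException {
        waitFor("mariadb", 3306, 120_000);
        waitFor("kafka", 9092, 120_000);
        waitFor("pap", 6969, 120_000);
    }
}

A plain shell loop around a tool such as nc -z would achieve the same effect; the point is only that each component defers startup until every service it logs as "Waiting for ..." is reachable.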
23:16:50 policy-apex-pdp | pap (172.17.0.10:6969) open 23:16:50 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.356+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.544+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:50 policy-apex-pdp | allow.auto.create.topics = true 23:16:50 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:50 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:50 policy-apex-pdp | auto.offset.reset = latest 23:16:50 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:50 policy-apex-pdp | check.crcs = true 23:16:50 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:50 policy-apex-pdp | client.id = consumer-cd2396fb-4c66-4451-a067-57142bc9537e-1 23:16:50 policy-apex-pdp | client.rack = 23:16:50 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:50 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:50 policy-apex-pdp | enable.auto.commit = true 23:16:50 policy-apex-pdp | exclude.internal.topics = true 23:16:50 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:50 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:50 policy-apex-pdp | fetch.min.bytes = 1 23:16:50 policy-apex-pdp | group.id = cd2396fb-4c66-4451-a067-57142bc9537e 23:16:50 policy-apex-pdp | group.instance.id = null 23:16:50 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:50 policy-apex-pdp | interceptor.classes = [] 23:16:50 policy-apex-pdp | internal.leave.group.on.close = true 23:16:50 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:50 policy-apex-pdp | isolation.level = read_uncommitted 23:16:50 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:50 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:50 policy-apex-pdp | max.poll.records = 500 23:16:50 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:50 policy-apex-pdp | metric.reporters = [] 23:16:50 policy-apex-pdp | metrics.num.samples = 2 23:16:50 policy-apex-pdp | metrics.recording.level = INFO 23:16:50 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:50 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:50 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:50 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:50 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:50 policy-apex-pdp | request.timeout.ms = 30000 23:16:50 policy-apex-pdp | retry.backoff.ms = 100 23:16:50 
policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:50 policy-apex-pdp | sasl.jaas.config = null 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,596] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,614] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,622] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 23:16:50 kafka | [2024-03-10 23:14:53,623] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,696] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 kafka | [2024-03-10 23:14:53,709] INFO Created log 
for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:50 kafka | [2024-03-10 23:14:53,715] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:50 kafka | [2024-03-10 23:14:53,717] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 kafka | [2024-03-10 23:14:53,720] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(0AZYR26wR5iORv2DNmvoOw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:50 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:50 policy-apex-pdp | sasl.login.class = null 23:16:50 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:50 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:50 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:50 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:50 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:50 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:50 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:50 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:50 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:50 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:50 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:50 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:50 policy-apex-pdp | security.providers = null 23:16:50 policy-apex-pdp | send.buffer.bytes = 131072 23:16:50 policy-apex-pdp | session.timeout.ms = 45000 23:16:50 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:50 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:50 policy-apex-pdp | ssl.cipher.suites = null 
23:16:50 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:50 policy-apex-pdp | ssl.engine.factory.class = null 23:16:50 policy-apex-pdp | ssl.key.password = null 23:16:50 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:50 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:50 policy-apex-pdp | ssl.keystore.key = null 23:16:50 policy-apex-pdp | ssl.keystore.location = null 23:16:50 policy-apex-pdp | ssl.keystore.password = null 23:16:50 policy-apex-pdp | ssl.keystore.type = JKS 23:16:50 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:50 policy-apex-pdp | ssl.provider = null 23:16:50 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:50 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:50 policy-apex-pdp | ssl.truststore.certificates = null 23:16:50 policy-apex-pdp | ssl.truststore.location = null 23:16:50 policy-apex-pdp | ssl.truststore.password = null 23:16:50 policy-apex-pdp | ssl.truststore.type = JKS 23:16:50 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-apex-pdp | 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.713+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.714+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.714+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112494712 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.716+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-1, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Subscribed to topic(s): policy-pdp-pap 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.728+00:00|INFO|ServiceManager|main] service manager starting 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.728+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.732+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cd2396fb-4c66-4451-a067-57142bc9537e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.752+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:50 policy-apex-pdp | allow.auto.create.topics = true 23:16:50 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:50 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:50 policy-apex-pdp | auto.offset.reset = latest 23:16:50 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:50 policy-apex-pdp | check.crcs = true 23:16:50 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:50 policy-apex-pdp | client.id = consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.945321883Z level=info msg="Executing migration" id="create cache_data table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.947482602Z level=info msg="Migration successfully executed" id="create cache_data 
table" duration=2.16035ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.952324123Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.95355846Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.236148ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.960062429Z level=info msg="Executing migration" id="create short_url table v1" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.961491311Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.428812ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.97026988Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.972622084Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.350404ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.978017287Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.978242092Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=225.105µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.985703102Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.985953087Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=249.545µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.991263198Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.992406784Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.142726ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.998666677Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:14.999799753Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.132726ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.00811957Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.01020029Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=2.07296ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.017950013Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.018164807Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=201.494µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.022382555Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.023460915Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.07895ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.032636515Z level=info msg="Executing migration" 
id="drop index in alert_definition on org_id and uid columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.034172703Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.536138ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.039858058Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.041535379Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.677301ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.047051191Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.048146341Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.09478ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.055858184Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.064915911Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.058087ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.070202599Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.07135513Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.150371ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.078206947Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.07838421Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=176.543µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.082766972Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.08379934Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.031518ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.088678031Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.090377621Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.69894ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.098802457Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.099905848Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.102781ms 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.105167085Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.105458631Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" 
duration=292.016µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.114926745Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.116649448Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.719812ms 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,732] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 
kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 policy-apex-pdp | client.rack = 23:16:50 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:50 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:50 policy-apex-pdp | enable.auto.commit = true 23:16:50 policy-apex-pdp | exclude.internal.topics = true 23:16:50 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:50 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:50 policy-apex-pdp | fetch.min.bytes = 1 23:16:50 policy-apex-pdp | group.id = cd2396fb-4c66-4451-a067-57142bc9537e 23:16:50 policy-apex-pdp | group.instance.id = null 23:16:50 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:50 policy-apex-pdp | interceptor.classes = [] 23:16:50 policy-apex-pdp | internal.leave.group.on.close = true 23:16:50 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:50 policy-apex-pdp | isolation.level = read_uncommitted 23:16:50 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:50 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:50 policy-apex-pdp | max.poll.records = 500 23:16:50 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:50 policy-apex-pdp | metric.reporters = [] 23:16:50 policy-apex-pdp | metrics.num.samples = 2 23:16:50 policy-apex-pdp | metrics.recording.level = INFO 23:16:50 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:50 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:50 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:50 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:50 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:50 policy-apex-pdp | request.timeout.ms = 30000 23:16:50 policy-apex-pdp | retry.backoff.ms = 100 23:16:50 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:50 policy-apex-pdp | sasl.jaas.config = null 23:16:50 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:50 
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:50 policy-apex-pdp | sasl.login.class = null 23:16:50 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:50 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:50 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:50 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:50 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:50 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:50 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:50 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:50 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:50 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:50 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:50 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:50 policy-apex-pdp | security.providers = null 23:16:50 policy-apex-pdp | send.buffer.bytes = 131072 23:16:50 policy-apex-pdp | session.timeout.ms = 45000 23:16:50 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:50 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:50 policy-apex-pdp | ssl.cipher.suites = null 23:16:50 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:50 policy-apex-pdp | ssl.engine.factory.class = null 23:16:50 policy-apex-pdp | ssl.key.password = null 23:16:50 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:50 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:50 policy-apex-pdp | ssl.keystore.key = null 23:16:50 policy-apex-pdp | ssl.keystore.location = null 23:16:50 policy-apex-pdp | ssl.keystore.password = null 23:16:50 policy-apex-pdp | ssl.keystore.type = JKS 23:16:50 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:50 policy-apex-pdp | ssl.provider = null 23:16:50 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:50 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:50 policy-apex-pdp | ssl.truststore.certificates = null 23:16:50 policy-apex-pdp | ssl.truststore.location = null 23:16:50 policy-apex-pdp | ssl.truststore.password = null 23:16:50 policy-apex-pdp | ssl.truststore.type = JKS 23:16:50 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-apex-pdp | 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.760+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.761+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.761+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112494760 23:16:50 policy-apex-pdp | 
[2024-03-10T23:14:54.761+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Subscribed to topic(s): policy-pdp-pap 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.762+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=16af55e6-b7a5-47d9-add5-79d98da5a3e8, alive=false, publisher=null]]: starting 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.778+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:50 policy-apex-pdp | acks = -1 23:16:50 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:50 policy-apex-pdp | batch.size = 16384 23:16:50 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:50 policy-apex-pdp | buffer.memory = 33554432 23:16:50 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:50 policy-apex-pdp | client.id = producer-1 23:16:50 policy-apex-pdp | compression.type = none 23:16:50 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:50 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:50 policy-apex-pdp | enable.idempotence = true 23:16:50 policy-apex-pdp | interceptor.classes = [] 23:16:50 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:50 policy-apex-pdp | linger.ms = 0 23:16:50 policy-apex-pdp | max.block.ms = 60000 23:16:50 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:50 policy-apex-pdp | max.request.size = 1048576 23:16:50 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:50 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:50 policy-apex-pdp | metric.reporters = [] 23:16:50 policy-apex-pdp | metrics.num.samples = 2 23:16:50 policy-apex-pdp | metrics.recording.level = INFO 23:16:50 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:50 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:50 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:50 policy-apex-pdp | partitioner.class = null 23:16:50 policy-apex-pdp | partitioner.ignore.keys = false 23:16:50 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:50 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:50 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:50 policy-apex-pdp | request.timeout.ms = 30000 23:16:50 policy-apex-pdp | retries = 2147483647 23:16:50 policy-apex-pdp | retry.backoff.ms = 100 23:16:50 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:50 policy-apex-pdp | sasl.jaas.config = null 23:16:50 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:50 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:50 policy-apex-pdp | sasl.login.class = null 23:16:50 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:50 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:50 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:50 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:50 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:50 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:50 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:50 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 
23:16:50 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:50 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:50 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:50 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:50 policy-apex-pdp | security.providers = null 23:16:50 policy-apex-pdp | send.buffer.bytes = 131072 23:16:50 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:50 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:50 policy-apex-pdp | ssl.cipher.suites = null 23:16:50 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:50 policy-apex-pdp | ssl.engine.factory.class = null 23:16:50 policy-apex-pdp | ssl.key.password = null 23:16:50 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:50 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:50 policy-apex-pdp | ssl.keystore.key = null 23:16:50 policy-apex-pdp | ssl.keystore.location = null 23:16:50 policy-apex-pdp | ssl.keystore.password = null 23:16:50 policy-apex-pdp | ssl.keystore.type = JKS 23:16:50 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:50 policy-apex-pdp | ssl.provider = null 23:16:50 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:50 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:50 policy-apex-pdp | ssl.truststore.certificates = null 23:16:50 policy-apex-pdp | ssl.truststore.location = null 23:16:50 policy-apex-pdp | ssl.truststore.password = null 23:16:50 policy-apex-pdp | ssl.truststore.type = JKS 23:16:50 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:50 policy-apex-pdp | transactional.id = null 23:16:50 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:50 policy-apex-pdp | 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.789+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
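Aside (not part of the job output): the ProducerConfig dump above maps directly onto the standard kafka-clients Java API. A minimal sketch of building an equivalent idempotent producer, assuming kafka-clients is on the classpath and a broker is reachable at kafka:9092; the class name and the sample payload are illustrative, not taken from the ONAP code.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpStatusProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Illustrative subset of the ProducerConfig values dumped above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // acks = -1 in the log
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");        // "Instantiated an idempotent producer"
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The PDP publishes its status messages on the policy-pdp-pap topic.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}
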
23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.808+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.808+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.808+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112494808 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.808+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=16af55e6-b7a5-47d9-add5-79d98da5a3e8, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.808+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.809+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.811+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.811+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.814+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.814+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.814+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.814+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cd2396fb-4c66-4451-a067-57142bc9537e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.815+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cd2396fb-4c66-4451-a067-57142bc9537e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.815+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.830+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:50 policy-apex-pdp | [] 23:16:50 policy-apex-pdp | [2024-03-10T23:14:54.833+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"132a6f29-727c-4184-ae9f-3b3ba0a9bc77","timestampMs":1710112494814,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.015+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.016+00:00|INFO|ServiceManager|main] service manager starting 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.016+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.016+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.027+00:00|INFO|ServiceManager|main] service manager started 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.027+00:00|INFO|ServiceManager|main] service manager started 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.027+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.027+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.184+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Cluster ID: dVKmUcACQYWhG0JC5XUpMQ 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.184+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: dVKmUcACQYWhG0JC5XUpMQ 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.186+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.187+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.193+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] (Re-)joining group 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.210+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Request joining group due to: need to re-join with the given member-id: consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2-a841a5cf-26c3-4eef-82c0-f364852bf17c 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.210+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.210+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] (Re-)joining group 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.692+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:50 policy-apex-pdp | [2024-03-10T23:14:55.692+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:50 policy-apex-pdp | [2024-03-10T23:14:56.165+00:00|INFO|RequestLog|qtp1068445309-30] 172.17.0.4 - policyadmin [10/Mar/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10642 "-" "Prometheus/2.50.1" 23:16:50 policy-apex-pdp | [2024-03-10T23:14:58.216+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Successfully joined group with generation Generation{generationId=1, memberId='consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2-a841a5cf-26c3-4eef-82c0-f364852bf17c', protocol='range'} 23:16:50 policy-apex-pdp | [2024-03-10T23:14:58.224+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Finished assignment for group at generation 1: {consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2-a841a5cf-26c3-4eef-82c0-f364852bf17c=Assignment(partitions=[policy-pdp-pap-0])} 23:16:50 policy-apex-pdp | [2024-03-10T23:14:58.233+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Successfully synced group in generation Generation{generationId=1, memberId='consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2-a841a5cf-26c3-4eef-82c0-f364852bf17c', protocol='range'} 23:16:50 policy-apex-pdp | [2024-03-10T23:14:58.233+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:50 policy-apex-pdp | [2024-03-10T23:14:58.236+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Adding newly assigned partitions: policy-pdp-pap-0 23:16:50 policy-apex-pdp | [2024-03-10T23:14:58.245+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Found no committed offset for partition policy-pdp-pap-0 23:16:50 policy-apex-pdp | [2024-03-10T23:14:58.253+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2, groupId=cd2396fb-4c66-4451-a067-57142bc9537e] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
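Aside (not part of the job output): the group join, partition assignment, and offset reset sequence above is the normal behaviour of a new consumer group with auto.offset.reset=latest subscribing to policy-pdp-pap. A minimal consumer sketch with an equivalent illustrative subset of the ConsumerConfig values dumped earlier, assuming kafka-clients on the classpath and a broker at kafka:9092; the group id here is a placeholder rather than the generated UUID group used by the PDP.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");       // log uses a generated UUID group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");     // why the offset resets to the log end above
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("partition=%d offset=%d value=%s%n", r.partition(), r.offset(), r.value());
            }
        }
    }
}
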
23:16:50 policy-apex-pdp | [2024-03-10T23:15:14.815+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eee9c567-b75b-41ed-b746-919bec03d1e2","timestampMs":1710112514815,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:14.842+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eee9c567-b75b-41ed-b746-919bec03d1e2","timestampMs":1710112514815,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:14.844+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.009+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","timestampMs":1710112514941,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.017+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.017+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce266375-bc26-4441-9602-22e77eadb97f","timestampMs":1710112515017,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.018+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7a0ef3bf-5a26-46e2-90b4-a5c72084a05e","timestampMs":1710112515018,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.033+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce266375-bc26-4441-9602-22e77eadb97f","timestampMs":1710112515017,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.035+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.036+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7a0ef3bf-5a26-46e2-90b4-a5c72084a05e","timestampMs":1710112515018,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.036+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.096+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","timestampMs":1710112514942,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.100+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"cfe006df-6f88-4cb5-babd-358e5ffa270f","timestampMs":1710112515100,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.114+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"cfe006df-6f88-4cb5-babd-358e5ffa270f","timestampMs":1710112515100,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.118+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.143+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"bd8174cb-de3f-4b45-85fe-9bded866eeba","timestampMs":1710112515115,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.145+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bd8174cb-de3f-4b45-85fe-9bded866eeba","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"893be548-c876-4831-b932-0692ae4fde57","timestampMs":1710112515144,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.160+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bd8174cb-de3f-4b45-85fe-9bded866eeba","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"893be548-c876-4831-b932-0692ae4fde57","timestampMs":1710112515144,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-apex-pdp | [2024-03-10T23:15:15.160+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:50 policy-apex-pdp | [2024-03-10T23:15:56.107+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.4 - policyadmin [10/Mar/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10642 "-" "Prometheus/2.50.1" 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,733] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:53,734] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:50 policy-db-migrator | Waiting for mariadb port 3306... 23:16:50 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:16:50 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:16:50 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:16:50 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:16:50 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:16:50 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused 23:16:50 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded! 23:16:50 policy-db-migrator | 321 blocks 23:16:50 policy-db-migrator | Preparing upgrade release version: 0800 23:16:50 policy-db-migrator | Preparing upgrade release version: 0900 23:16:50 policy-db-migrator | Preparing upgrade release version: 1000 23:16:50 policy-db-migrator | Preparing upgrade release version: 1100 23:16:50 policy-db-migrator | Preparing upgrade release version: 1200 23:16:50 policy-db-migrator | Preparing upgrade release version: 1300 23:16:50 policy-db-migrator | Done 23:16:50 policy-db-migrator | name version 23:16:50 policy-db-migrator | policyadmin 0 23:16:50 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:50 policy-db-migrator | upgrade: 0 -> 1300 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 
0130-jpapdpsubgroup_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | Waiting for mariadb port 3306... 23:16:50 policy-pap | mariadb (172.17.0.5:3306) open 23:16:50 policy-pap | Waiting for kafka port 9092... 23:16:50 policy-pap | kafka (172.17.0.6:9092) open 23:16:50 policy-pap | Waiting for api port 6969... 
23:16:50 policy-pap | api (172.17.0.9:6969) open 23:16:50 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:50 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:50 policy-pap | 23:16:50 policy-pap | . ____ _ __ _ _ 23:16:50 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:50 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:50 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:50 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:50 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:50 policy-pap | :: Spring Boot :: (v3.1.8) 23:16:50 policy-pap | 23:16:50 policy-pap | [2024-03-10T23:14:42.770+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 36 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:50 policy-pap | [2024-03-10T23:14:42.772+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:50 policy-pap | [2024-03-10T23:14:44.681+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:50 policy-pap | [2024-03-10T23:14:44.783+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 93 ms. Found 7 JPA repository interfaces. 23:16:50 policy-pap | [2024-03-10T23:14:45.199+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:50 policy-pap | [2024-03-10T23:14:45.199+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:50 policy-pap | [2024-03-10T23:14:45.929+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:50 policy-pap | [2024-03-10T23:14:45.940+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:50 policy-pap | [2024-03-10T23:14:45.943+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:50 policy-pap | [2024-03-10T23:14:45.943+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:50 policy-pap | [2024-03-10T23:14:46.051+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:50 policy-pap | [2024-03-10T23:14:46.052+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3192 ms 23:16:50 policy-pap | [2024-03-10T23:14:46.520+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:50 policy-pap | [2024-03-10T23:14:46.623+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:50 policy-pap | [2024-03-10T23:14:46.626+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:50 policy-pap | [2024-03-10T23:14:46.677+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:50 policy-pap | [2024-03-10T23:14:47.038+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:50 policy-pap | [2024-03-10T23:14:47.062+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
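The "HikariPool-1 - Starting..." entry above is policy-pap opening its HikariCP connection pool against the same MariaDB instance before Hibernate initializes. A minimal sketch of an equivalent pool, assuming hypothetical credentials and pool sizing (only the host and port appear in the log):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class PapPoolSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:mariadb://172.17.0.5:3306/policyadmin"); // host/port as seen in the log
            cfg.setUsername("policy_user");  // hypothetical
            cfg.setPassword("policy_pass");  // hypothetical
            cfg.setMaximumPoolSize(10);      // hypothetical; the log does not show pool sizing
            try (HikariDataSource ds = new HikariDataSource(cfg);
                 Connection conn = ds.getConnection()) {
                // The "Added connection ..." and "Start completed." entries that follow correspond to
                // the pool establishing and validating its first physical connection.
                System.out.println(conn.isValid(2));
            }
        }
    }
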
23:16:50 policy-pap | [2024-03-10T23:14:47.183+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a 23:16:50 policy-pap | [2024-03-10T23:14:47.185+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES 
LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:50 policy-pap | [2024-03-10T23:14:49.288+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:50 policy-pap | [2024-03-10T23:14:49.292+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:50 policy-pap | [2024-03-10T23:14:49.837+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:50 policy-pap | [2024-03-10T23:14:50.329+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:50 policy-pap | [2024-03-10T23:14:50.479+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:50 policy-pap | [2024-03-10T23:14:50.765+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:50 policy-pap | allow.auto.create.topics = true 23:16:50 policy-pap | auto.commit.interval.ms = 5000 23:16:50 policy-pap | auto.include.jmx.reporter = true 23:16:50 policy-pap | auto.offset.reset = latest 23:16:50 policy-pap | bootstrap.servers = [kafka:9092] 23:16:50 policy-pap | check.crcs = true 23:16:50 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:50 policy-pap | client.id = consumer-688a4207-1e5e-4290-924d-17e6019295ad-1 23:16:50 policy-pap | client.rack = 23:16:50 policy-pap | connections.max.idle.ms = 540000 23:16:50 policy-pap | default.api.timeout.ms = 60000 23:16:50 policy-pap | enable.auto.commit = true 23:16:50 policy-pap | exclude.internal.topics = true 23:16:50 policy-pap | fetch.max.bytes = 52428800 23:16:50 policy-pap | fetch.max.wait.ms = 500 23:16:50 policy-pap | fetch.min.bytes = 1 23:16:50 policy-pap | group.id = 688a4207-1e5e-4290-924d-17e6019295ad 23:16:50 policy-pap | group.instance.id = null 23:16:50 policy-pap | heartbeat.interval.ms = 3000 23:16:50 policy-pap | interceptor.classes = [] 23:16:50 policy-pap | internal.leave.group.on.close = true 23:16:50 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:50 policy-pap | isolation.level = read_uncommitted 23:16:50 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-pap | max.partition.fetch.bytes = 1048576 23:16:50 policy-pap | max.poll.interval.ms = 300000 23:16:50 policy-pap | max.poll.records = 500 23:16:50 policy-pap | metadata.max.age.ms = 300000 23:16:50 policy-pap | metric.reporters = [] 23:16:50 policy-pap | metrics.num.samples = 2 23:16:50 policy-pap | metrics.recording.level = INFO 23:16:50 policy-pap | metrics.sample.window.ms = 30000 23:16:50 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:50 policy-pap | receive.buffer.bytes = 65536 23:16:50 policy-pap | reconnect.backoff.max.ms = 1000 23:16:50 policy-pap | reconnect.backoff.ms = 50 23:16:50 policy-pap | request.timeout.ms = 30000 23:16:50 policy-pap | retry.backoff.ms = 100 23:16:50 policy-pap | sasl.client.callback.handler.class = null 23:16:50 policy-pap | sasl.jaas.config = null 23:16:50 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 policy-pap | sasl.kerberos.service.name = null 23:16:50 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 policy-pap | sasl.login.callback.handler.class = null 23:16:50 policy-pap | sasl.login.class = null 23:16:50 policy-pap | sasl.login.connect.timeout.ms = null 23:16:50 policy-pap | sasl.login.read.timeout.ms = null 23:16:50 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:50 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:50 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:50 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:50 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:50 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) 
NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.mechanism = GSSAPI 23:16:50 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:50 policy-pap | sasl.oauthbearer.expected.issuer = null 
23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:50 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:50 policy-pap | security.protocol = PLAINTEXT 23:16:50 policy-pap | security.providers = null 23:16:50 policy-pap | send.buffer.bytes = 131072 23:16:50 policy-pap | session.timeout.ms = 45000 23:16:50 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:50 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:50 policy-pap | ssl.cipher.suites = null 23:16:50 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:50 policy-pap | ssl.engine.factory.class = null 23:16:50 policy-pap | ssl.key.password = null 23:16:50 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:50 policy-pap | ssl.keystore.certificate.chain = null 23:16:50 policy-pap | ssl.keystore.key = null 23:16:50 policy-pap | ssl.keystore.location = null 23:16:50 policy-pap | ssl.keystore.password = null 23:16:50 policy-pap | ssl.keystore.type = JKS 23:16:50 policy-pap | ssl.protocol = TLSv1.3 23:16:50 policy-pap | ssl.provider = null 23:16:50 policy-pap | ssl.secure.random.implementation = null 23:16:50 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:50 policy-pap | ssl.truststore.certificates = null 23:16:50 policy-pap | ssl.truststore.location = null 23:16:50 policy-pap | ssl.truststore.password = null 23:16:50 policy-pap | ssl.truststore.type = JKS 23:16:50 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-pap | 23:16:50 policy-pap | [2024-03-10T23:14:50.955+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 policy-pap | [2024-03-10T23:14:50.955+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 policy-pap | [2024-03-10T23:14:50.955+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112490953 23:16:50 policy-pap | [2024-03-10T23:14:50.957+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-1, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Subscribed to topic(s): policy-pdp-pap 23:16:50 policy-pap | [2024-03-10T23:14:50.958+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:50 policy-pap | allow.auto.create.topics = true 23:16:50 policy-pap | auto.commit.interval.ms = 5000 23:16:50 policy-pap | auto.include.jmx.reporter = true 23:16:50 policy-pap | auto.offset.reset = latest 23:16:50 policy-pap | bootstrap.servers = [kafka:9092] 23:16:50 policy-pap | check.crcs = true 23:16:50 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:50 policy-pap | client.id = consumer-policy-pap-2 23:16:50 policy-pap | client.rack = 23:16:50 policy-pap | connections.max.idle.ms = 540000 23:16:50 policy-pap | default.api.timeout.ms = 60000 23:16:50 policy-pap | enable.auto.commit = true 23:16:50 policy-pap | exclude.internal.topics = true 23:16:50 policy-pap | fetch.max.bytes = 52428800 23:16:50 policy-pap | fetch.max.wait.ms = 500 23:16:50 policy-pap | fetch.min.bytes = 1 23:16:50 policy-pap | group.id = policy-pap 23:16:50 policy-pap | group.instance.id = null 23:16:50 policy-pap | heartbeat.interval.ms = 3000 
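The ConsumerConfig dump above shows the Kafka consumer that policy-pap creates for the policy-pdp-pap topic: bootstrap.servers kafka:9092, group.id 688a4207-1e5e-4290-924d-17e6019295ad, latest offset reset, and StringDeserializer for both key and value. A minimal sketch of a consumer built from those same values with the standard Kafka client API; the single poll is illustrative only:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ConsumerConfig listing in the log above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "688a4207-1e5e-4290-924d-17e6019295ad");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
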
23:16:50 policy-pap | interceptor.classes = [] 23:16:50 policy-pap | internal.leave.group.on.close = true 23:16:50 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion 
VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | isolation.level = read_uncommitted 23:16:50 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-pap | max.partition.fetch.bytes = 1048576 23:16:50 policy-pap | max.poll.interval.ms = 300000 23:16:50 policy-pap | max.poll.records = 500 23:16:50 policy-pap | metadata.max.age.ms = 300000 23:16:50 policy-pap | metric.reporters = [] 23:16:50 policy-pap | metrics.num.samples = 2 23:16:50 policy-pap | metrics.recording.level = INFO 23:16:50 policy-pap | metrics.sample.window.ms = 30000 23:16:50 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:50 policy-pap | receive.buffer.bytes = 65536 23:16:50 policy-pap | reconnect.backoff.max.ms = 1000 23:16:50 policy-pap | reconnect.backoff.ms = 50 23:16:50 policy-pap | request.timeout.ms = 30000 23:16:50 policy-pap | retry.backoff.ms = 100 23:16:50 policy-pap | sasl.client.callback.handler.class = null 23:16:50 policy-pap | sasl.jaas.config = null 23:16:50 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 policy-pap | sasl.kerberos.service.name = null 23:16:50 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 policy-pap | sasl.login.callback.handler.class = null 23:16:50 policy-pap | sasl.login.class = null 23:16:50 policy-pap | sasl.login.connect.timeout.ms = null 23:16:50 policy-pap | sasl.login.read.timeout.ms = null 23:16:50 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:50 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:50 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:50 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:50 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:50 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:50 policy-pap | sasl.mechanism = GSSAPI 23:16:50 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:50 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:50 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:50 policy-pap | security.protocol = PLAINTEXT 23:16:50 policy-pap | security.providers = null 23:16:50 policy-pap | send.buffer.bytes = 131072 23:16:50 policy-pap | session.timeout.ms = 45000 23:16:50 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:50 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:50 policy-pap | ssl.cipher.suites = null 23:16:50 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:50 policy-pap | ssl.engine.factory.class = null 23:16:50 policy-pap | ssl.key.password = null 23:16:50 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:50 policy-pap | ssl.keystore.certificate.chain = null 23:16:50 policy-pap | ssl.keystore.key = null 23:16:50 policy-pap | ssl.keystore.location = null 23:16:50 policy-pap | ssl.keystore.password = null 23:16:50 policy-pap | ssl.keystore.type = JKS 23:16:50 policy-pap | ssl.protocol = TLSv1.3 23:16:50 policy-pap | ssl.provider = null 23:16:50 policy-pap | ssl.secure.random.implementation = null 23:16:50 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:50 policy-pap | ssl.truststore.certificates = null 23:16:50 policy-pap | ssl.truststore.location = null 23:16:50 policy-pap | ssl.truststore.password = null 23:16:50 policy-pap | ssl.truststore.type = JKS 23:16:50 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-pap | 23:16:50 policy-pap | [2024-03-10T23:14:50.964+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 policy-pap | [2024-03-10T23:14:50.964+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name 
VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:50 policy-db-migrator 
| -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | 23:16:50 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:50 policy-pap | [2024-03-10T23:14:50.964+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112490964 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.125729895Z level=info msg="Executing migration" id="create alert_instance table" 23:16:50 kafka | [2024-03-10 23:14:53,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:50.964+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.126905396Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.176181ms 23:16:50 kafka | [2024-03-10 23:14:53,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:50 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:51.311+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:50 prometheus | ts=2024-03-10T23:14:11.512Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.132108683Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:50 kafka | [2024-03-10 23:14:53,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:50 simulator | overriding logback.xml 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:51.473+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:50 prometheus | ts=2024-03-10T23:14:11.513Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.133900816Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.791543ms 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:08,881 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:50 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:50 policy-pap | [2024-03-10T23:14:51.742+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@71d2261e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@53917c92, org.springframework.security.web.context.SecurityContextHolderFilter@7c359808, org.springframework.security.web.header.HeaderWriterFilter@52963839, org.springframework.security.web.authentication.logout.LogoutFilter@6787bd41, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@39420d59, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@16361e61, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@1734b1a, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1fa796a4, org.springframework.security.web.access.ExceptionTranslationFilter@7ce4498f, org.springframework.security.web.access.intercept.AuthorizationFilter@f287a4e] 23:16:50 prometheus | ts=2024-03-10T23:14:11.513Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.138785337Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:50 simulator | 2024-03-10 23:14:08,940 INFO org.onap.policy.models.simulators starting 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.629+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:50 prometheus | ts=2024-03-10T23:14:11.513Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.140327265Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.541587ms 23:16:50 simulator | 2024-03-10 23:14:08,940 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 policy-pap | [2024-03-10T23:14:52.737+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:50 prometheus | ts=2024-03-10T23:14:11.513Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.147499007Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:50 simulator | 2024-03-10 23:14:09,145 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.762+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:50 prometheus | ts=2024-03-10T23:14:11.513Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.156161837Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.66373ms 23:16:50 simulator | 2024-03-10 23:14:09,146 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.783+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:50 prometheus | ts=2024-03-10T23:14:11.516Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.161406234Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:50 simulator | 2024-03-10 23:14:09,261 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.783+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:50 prometheus | ts=2024-03-10T23:14:11.517Z caller=main.go:1118 level=info msg="Starting TSDB ..." 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.162376572Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=970.088µs 23:16:50 simulator | 2024-03-10 23:14:09,272 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:50 policy-pap | [2024-03-10T23:14:52.783+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:50 prometheus | ts=2024-03-10T23:14:11.519Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.16712492Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:50 simulator | 2024-03-10 23:14:09,274 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.784+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:50 prometheus | ts=2024-03-10T23:14:11.519Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.168143789Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.018579ms 23:16:50 simulator | 2024-03-10 23:14:09,281 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:50 policy-pap | [2024-03-10T23:14:52.784+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:50 prometheus | ts=2024-03-10T23:14:11.527Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:50 grafana | logger=migrator 
t=2024-03-10T23:14:15.174675149Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:50 simulator | 2024-03-10 23:14:09,341 INFO Session workerName=node0 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.784+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:50 prometheus | ts=2024-03-10T23:14:11.527Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.86µs 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.198882587Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=24.207047ms 23:16:50 simulator | 2024-03-10 23:14:09,913 INFO Using GSON for REST calls 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.785+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:50 prometheus | ts=2024-03-10T23:14:11.527Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.204931168Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:50 simulator | 2024-03-10 23:14:09,982 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.789+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=688a4207-1e5e-4290-924d-17e6019295ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2a525f88 23:16:50 prometheus | ts=2024-03-10T23:14:11.527Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.231872296Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.941468ms 23:16:50 simulator | 2024-03-10 23:14:09,991 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:50 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:50 policy-pap | [2024-03-10T23:14:52.800+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=688a4207-1e5e-4290-924d-17e6019295ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:50 prometheus | ts=2024-03-10T23:14:11.527Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=126.812µs wal_replay_duration=374.908µs wbl_replay_duration=170ns total_replay_duration=531.11µs 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.237864527Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:50 simulator | 2024-03-10 23:14:09,999 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1610ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.801+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:50 prometheus | ts=2024-03-10T23:14:11.530Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.238673642Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=808.724µs 23:16:50 simulator | 2024-03-10 23:14:10,000 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI 
simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4274 ms. 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:50 policy-pap | allow.auto.create.topics = true 23:16:50 prometheus | ts=2024-03-10T23:14:11.530Z caller=main.go:1142 level=info msg="TSDB started" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.244831615Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:50 simulator | 2024-03-10 23:14:10,010 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | auto.commit.interval.ms = 5000 23:16:50 prometheus | ts=2024-03-10T23:14:11.530Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.245879225Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.04769ms 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,013 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:50 policy-db-migrator | 23:16:50 policy-pap | auto.include.jmx.reporter = true 23:16:50 prometheus | ts=2024-03-10T23:14:11.531Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.002338ms db_storage=1.5µs remote_storage=1.81µs web_handler=660ns query_engine=1.26µs scrape=239.784µs scrape_sd=171.213µs notify=27.711µs notify_sd=11.4µs rules=1.87µs tracing=5.15µs 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.251373267Z level=info msg="Executing migration" id="add 
current_reason column related to current_state" 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,014 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-db-migrator | 23:16:50 policy-pap | auto.offset.reset = latest 23:16:50 prometheus | ts=2024-03-10T23:14:11.531Z caller=main.go:1103 level=info msg="Server is ready to receive web requests." 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.257558321Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.185934ms 23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,015 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:50 policy-pap | bootstrap.servers = [kafka:9092] 23:16:50 prometheus | ts=2024-03-10T23:14:11.531Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.265124041Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:16:50 simulator | 2024-03-10 23:14:10,017 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:50 policy-db-migrator | --------------
23:16:50 policy-pap | check.crcs = true
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.269223896Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.099515ms
23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
23:16:50 simulator | 2024-03-10 23:14:10,025 INFO Session workerName=node0
23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:50 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.317390426Z level=info msg="Executing migration" id="create alert_rule table"
23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:16:50 simulator | 2024-03-10 23:14:10,078 INFO Using GSON for REST calls
23:16:50 policy-db-migrator | --------------
23:16:50 policy-pap | client.id = consumer-688a4207-1e5e-4290-924d-17e6019295ad-3
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.318429826Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.04068ms
23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:16:50 simulator | 2024-03-10 23:14:10,088 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}
23:16:50 policy-db-migrator | 
23:16:50 policy-pap | client.rack = 
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.323862706Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:50 simulator | 2024-03-10 23:14:10,090 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:50 policy-db-migrator | 
23:16:50 policy-pap | connections.max.idle.ms = 540000
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.324935476Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.072469ms
23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:50 simulator | 2024-03-10 23:14:10,091 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1702ms
23:16:50 policy-db-migrator | > upgrade 0660-toscaparameter.sql
23:16:50 policy-pap | default.api.timeout.ms = 60000
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.333400262Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
23:16:50 kafka | [2024-03-10 23:14:53,735] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:50 simulator | 2024-03-10 23:14:10,091 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms.
23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | enable.auto.commit = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.337753433Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=4.358821ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,092 INFO org.onap.policy.models.simulators starting SO simulator 23:16:50 policy-pap | exclude.internal.topics = true 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.342835206Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,094 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:50 policy-pap | fetch.max.bytes = 52428800 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.344083629Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.242083ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,094 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-pap | fetch.max.wait.ms = 500 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.350264813Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,095 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-pap | fetch.min.bytes = 1 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.350336755Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=76.432µs 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,095 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:50 policy-pap | group.id = 688a4207-1e5e-4290-924d-17e6019295ad 23:16:50 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.359901311Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,104 INFO Session workerName=node0 23:16:50 policy-pap | group.instance.id = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.368345496Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.445955ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,158 INFO Using GSON for REST calls 23:16:50 policy-pap | heartbeat.interval.ms = 3000 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.373843188Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,171 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 23:16:50 policy-pap | interceptor.classes = [] 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.380132914Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.261017ms 23:16:50 simulator | 2024-03-10 23:14:10,172 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:50 policy-pap | internal.leave.group.on.close = true 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.385127715Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:50 simulator | 2024-03-10 23:14:10,172 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1783ms 23:16:50 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.390894912Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.767217ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller 
id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,172 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4922 ms. 23:16:50 policy-pap | isolation.level = read_uncommitted 23:16:50 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.397421472Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,173 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:50 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.39838462Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=962.308µs 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,176 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:50 policy-pap | max.partition.fetch.bytes = 1048576 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.402672029Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,176 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-pap | max.poll.interval.ms = 300000 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.404271298Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.598169ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,177 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:50 policy-pap | max.poll.records = 500 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.40871234Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,178 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:50 policy-pap | metadata.max.age.ms = 300000 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.414646559Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.934879ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,189 INFO Session workerName=node0 23:16:50 policy-pap | metric.reporters = [] 23:16:50 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.424182514Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,232 INFO Using GSON for REST calls 23:16:50 policy-pap | metrics.num.samples = 2 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.4326625Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=8.477996ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,240 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 23:16:50 simulator | 2024-03-10 23:14:10,241 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:50 policy-pap | metrics.recording.level = INFO 23:16:50 policy-db-migrator | CREATE TABLE 
IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.437104562Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:50 simulator | 2024-03-10 23:14:10,242 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1853ms 23:16:50 policy-pap | metrics.sample.window.ms = 30000 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.440025036Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.918514ms 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:50 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:50 simulator | 2024-03-10 23:14:10,242 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4935 ms. 
23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.443199374Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:50 kafka | [2024-03-10 23:14:53,736] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:50 policy-pap | receive.buffer.bytes = 65536 23:16:50 simulator | 2024-03-10 23:14:10,243 INFO org.onap.policy.models.simulators started 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.449191244Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.99133ms 23:16:50 kafka | [2024-03-10 23:14:53,736] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) 23:16:50 policy-pap | reconnect.backoff.max.ms = 1000 23:16:50 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:50 kafka | [2024-03-10 23:14:53,737] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.454575184Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:50 policy-pap | reconnect.backoff.ms = 50 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.460503502Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.927838ms 23:16:50 policy-pap | request.timeout.ms = 30000 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:50 kafka | [2024-03-10 23:14:53,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.463761233Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:50 policy-pap | retry.backoff.ms = 100 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.463957477Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=195.904µs 23:16:50 policy-pap | sasl.client.callback.handler.class = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator 
t=2024-03-10T23:14:15.467757776Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:50 policy-pap | sasl.jaas.config = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.469015959Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.257853ms 23:16:50 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.475044281Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:50 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.477318872Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.274801ms 23:16:50 policy-pap | sasl.kerberos.service.name = null 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.480944959Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:50 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.481833335Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=887.856µs 23:16:50 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.485206697Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:50 policy-pap | sasl.login.callback.handler.class = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | 
logger=migrator t=2024-03-10T23:14:15.485271299Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=64.911µs 23:16:50 policy-pap | sasl.login.class = null 23:16:50 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.489587838Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:50 policy-pap | sasl.login.connect.timeout.ms = null 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.494740733Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.151995ms 23:16:50 policy-pap | sasl.login.read.timeout.ms = null 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.499422189Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:50 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.505873088Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.450649ms 23:16:50 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.509512875Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:50 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.518887447Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=9.364792ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:50 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:50 grafana | 
logger=migrator t=2024-03-10T23:14:15.524593412Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.529566884Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.969322ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.533118769Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.mechanism = GSSAPI 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.540902712Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.785783ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.54621933Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.546296021Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=81.181µs 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:50 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.54950267Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.550299465Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=797.135µs 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.554043974Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.561911539Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.867375ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.565923913Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.566043775Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=119.152µs 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.571644388Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.580414139Z level=info msg="Migration successfully executed" id="add column org_id in 
alert_configuration" duration=8.766741ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | security.protocol = PLAINTEXT 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.592623924Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | security.providers = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.594691112Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=2.063228ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | send.buffer.bytes = 131072 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.599837817Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | session.timeout.ms = 45000 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.607284904Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.445408ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:50 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.611962759Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.612844036Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=880.347µs 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.cipher.suites = null 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) 
NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.617490621Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.618623533Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.132872ms 23:16:50 kafka | [2024-03-10 23:14:53,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.622386811Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.engine.factory.class = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.628778019Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.390378ms 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.key.password = null 23:16:50 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.634234959Z level=info msg="Executing migration" id="create provenance_type table" 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.635092156Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=856.986µs 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.keystore.certificate.chain = null 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.647757328Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 
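Each policy-db-migrator step above follows the same pattern: a "> upgrade NNNN-<name>.sql" banner, then the DDL that step executes (here the TOSCA relationship and requirement tables). A minimal sketch of applying one of those statements by hand over JDBC is shown below, using the toscarelationshiptypes DDL from the 0750 step above; the class name, JDBC URL, database name, and credentials are placeholders for illustration only and are not values taken from this job, since the policy-db-migrator container runs the .sql files itself.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ApplyUpgradeStep {
        public static void main(String[] args) throws Exception {
            // DDL copied from the 0750-toscarelationshiptypes.sql step in the log above.
            String ddl = "CREATE TABLE IF NOT EXISTS toscarelationshiptypes ("
                       + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                       + "PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))";
            // Placeholder endpoint and credentials -- substitute the MariaDB instance
            // the migrator is actually configured against.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_pass");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(ddl); // IF NOT EXISTS keeps re-running the step harmless
            }
        }
    }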
23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.keystore.key = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.649646803Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.895925ms 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.keystore.location = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.701630939Z level=info msg="Executing migration" id="create alert_image table" 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.703126997Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.495258ms 23:16:50 policy-pap | ssl.keystore.password = null 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-pap | ssl.keystore.type = JKS 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.715488174Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:50 policy-pap | ssl.protocol = TLSv1.3 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.717042573Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.560279ms 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.provider = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.73042016Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:50 kafka | [2024-03-10 23:14:53,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:50 policy-pap | ssl.secure.random.implementation = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.730476321Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=57.951µs 23:16:50 kafka | [2024-03-10 23:14:53,740] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:50 
grafana | logger=migrator t=2024-03-10T23:14:15.744113102Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:50 kafka | [2024-03-10 23:14:53,742] INFO [Broker id=1] Finished LeaderAndIsr request in 163ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | ssl.truststore.certificates = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.745167911Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.056989ms 23:16:50 kafka | [2024-03-10 23:14:53,746] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=0AZYR26wR5iORv2DNmvoOw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | ssl.truststore.location = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.754171406Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:50 kafka | [2024-03-10 23:14:53,757] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:50 policy-pap | ssl.truststore.password = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.755851557Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.679001ms 23:16:50 kafka | [2024-03-10 23:14:53,760] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.truststore.type = JKS 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.766118516Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:50 kafka | [2024-03-10 23:14:53,761] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:50 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.766585765Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:50 kafka | 
[2024-03-10 23:14:53,765] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.777046717Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:52.807+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.777895593Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=848.516µs 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:52.807+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.788603381Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:50 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:52.807+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112492807 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.789829173Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.232483ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:52.807+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Subscribed to topic(s): policy-pdp-pap 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.800159483Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION 
VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:52.808+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.809118398Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.959835ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:52.808+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d93a1df2-c75b-4d19-a8be-d65cd6bbf958, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3f2ab6ec 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.813836585Z level=info msg="Executing migration" id="create library_element table v1" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.808+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d93a1df2-c75b-4d19-a8be-d65cd6bbf958, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 
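The ConsumerConfig dump that follows records the settings PAP's heartbeat consumer was created with: bootstrap.servers=[kafka:9092], group.id=policy-pap, auto.offset.reset=latest, StringDeserializer for keys and values, and a subscription to the policy-pdp-pap topic. A minimal stand-alone sketch of a consumer built with those same values is shown below; it is illustrative only, is not part of the CSIT run, and the class name is invented for the example.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PdpPapHeartbeatReader {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the policy-pap ConsumerConfig lines in this log.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is whatever message arrives on policy-pdp-pap,
                    // e.g. PDP status/heartbeat traffic.
                    System.out.println(record.value());
                }
            }
        }
    }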
23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.814949725Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.1165ms 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.808+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.822068776Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:50 policy-pap | allow.auto.create.topics = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.822852041Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=782.775µs 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | auto.commit.interval.ms = 5000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.827026177Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 policy-pap | auto.include.jmx.reporter = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.827861773Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=835.156µs 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | auto.offset.reset = latest 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.832801463Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | bootstrap.servers = [kafka:9092] 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.833856943Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.055ms 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | check.crcs = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.845718961Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.847025285Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.311284ms 23:16:50 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | client.id = consumer-policy-pap-4 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.855135874Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | 
client.rack = 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.855188105Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=56.601µs 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | connections.max.idle.ms = 540000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.859858761Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | default.api.timeout.ms = 60000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.859952813Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=96.202µs 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | enable.auto.commit = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.86303466Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | exclude.internal.topics = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.863311805Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=277.615µs 23:16:50 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:50 policy-pap | fetch.max.bytes = 52428800 23:16:50 kafka | 
[2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.868021171Z level=info msg="Executing migration" id="create data_keys table" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | fetch.max.wait.ms = 500 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.868941518Z level=info msg="Migration successfully executed" id="create data_keys table" duration=920.147µs 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.873524083Z level=info msg="Executing migration" id="create secrets table" 23:16:50 policy-pap | fetch.min.bytes = 1 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,765] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.87445998Z level=info msg="Migration successfully executed" id="create secrets table" duration=932.697µs 23:16:50 policy-pap | group.id = policy-pap 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.879748427Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:50 policy-pap | group.instance.id = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.909611567Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=29.86296ms 23:16:50 policy-pap | heartbeat.interval.ms = 3000 23:16:50 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.913486528Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:50 policy-pap | interceptor.classes = [] 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.918539671Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.053143ms 23:16:50 policy-pap | internal.leave.group.on.close = true 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.925347696Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:50 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.925496739Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=148.643µs 23:16:50 policy-pap | isolation.level = read_uncommitted 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.930866027Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:50 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.964270782Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.409875ms 23:16:50 policy-pap | 
max.partition.fetch.bytes = 1048576 23:16:50 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:15.970009118Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:50 policy-pap | max.poll.interval.ms = 300000 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.00054334Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.530591ms 23:16:50 policy-pap | max.poll.records = 500 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.006067331Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:50 policy-pap | metadata.max.age.ms = 300000 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.007475947Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.415736ms 23:16:50 policy-pap | metric.reporters = [] 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.011038162Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:50 policy-pap | metrics.num.samples = 2 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.012263754Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.221812ms 23:16:50 policy-pap | metrics.recording.level = INFO 23:16:50 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | metrics.sample.window.ms = 30000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.018204003Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.018428028Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=224.395µs 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | receive.buffer.bytes = 65536 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.021922742Z level=info msg="Executing migration" id="create permission table" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | reconnect.backoff.max.ms = 1000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.023298066Z level=info msg="Migration successfully executed" id="create permission table" duration=1.373564ms 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 policy-pap | reconnect.backoff.ms = 50 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.027404022Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | request.timeout.ms = 30000 23:16:50 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.029028361Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.623929ms 23:16:50 policy-pap | retry.backoff.ms = 100 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.033995181Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:50 policy-pap | sasl.client.callback.handler.class = null 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.035320906Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.363205ms 23:16:50 policy-pap | sasl.jaas.config = null 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.079960301Z level=info msg="Executing migration" id="create role table" 23:16:50 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.081533749Z level=info msg="Migration successfully executed" id="create role table" duration=1.573748ms 23:16:50 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.085511363Z level=info msg="Executing migration" id="add column display_name" 23:16:50 policy-pap | sasl.kerberos.service.name = null 23:16:50 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.094088549Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.574356ms 23:16:50 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.098682302Z level=info msg="Executing migration" id="add column group_name" 23:16:50 policy-pap | sasl.login.callback.handler.class = null 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,766] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.105867114Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.181622ms 23:16:50 policy-pap | sasl.login.class = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:50 grafana | logger=migrator 
t=2024-03-10T23:14:16.109673643Z level=info msg="Executing migration" id="add index role.org_id" 23:16:50 policy-pap | sasl.login.connect.timeout.ms = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.110426587Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=752.614µs 23:16:50 policy-pap | sasl.login.read.timeout.ms = null 23:16:50 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:50 kafka | [2024-03-10 23:14:53,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.114042413Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:50 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.114790687Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=747.964µs 23:16:50 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:50 kafka | [2024-03-10 23:14:53,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.119309479Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:16:50 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.120394749Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.0881ms 23:16:50 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.125134236Z level=info msg="Executing migration" id="create team role table" 23:16:50 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.126469169Z level=info 
msg="Migration successfully executed" id="create team role table" duration=1.334123ms 23:16:50 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:50 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.130547644Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:50 policy-pap | sasl.mechanism = GSSAPI 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.132817285Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.272331ms 23:16:50 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.137832507Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:50 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.138935538Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.102151ms 23:16:50 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.143129904Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.144147642Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.017578ms 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition 
for partition __consumer_offsets-24 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.14783136Z level=info msg="Executing migration" id="create user role table" 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.148621524Z level=info msg="Migration successfully executed" id="create user role table" duration=789.704µs 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.153268649Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:50 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.15500985Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.737611ms 23:16:50 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.159049624Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:50 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.161493469Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.440645ms 23:16:50 policy-pap | security.protocol = PLAINTEXT 23:16:50 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.165720976Z level=info msg="Executing migration" id="add index user_role.user_id" 23:16:50 policy-pap | security.providers = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.166741825Z level=info msg="Migration successfully executed" id="add index user_role.user_id" 
duration=1.020709ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:50 policy-pap | send.buffer.bytes = 131072 23:16:50 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.173803584Z level=info msg="Executing migration" id="create builtin role table" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:50 policy-pap | session.timeout.ms = 45000 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.17469885Z level=info msg="Migration successfully executed" id="create builtin role table" duration=895.646µs 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:50 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.178802925Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:50 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.179901875Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.09867ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:50 policy-pap | ssl.cipher.suites = null 23:16:50 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.183564062Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:50 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.184653032Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.08871ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:50 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:50 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.190371146Z level=info msg="Executing migration" id="Add column org_id to 
builtin_role table" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:50 policy-pap | ssl.engine.factory.class = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.198258811Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.887244ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:50 policy-pap | ssl.key.password = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.201878636Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:50 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.202959356Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.07969ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:50 policy-pap | ssl.keystore.certificate.chain = null 23:16:50 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.206809296Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:50 policy-pap | ssl.keystore.key = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.20814724Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.337194ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:50 policy-pap | ssl.keystore.location = null 23:16:50 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.214058138Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:50 policy-pap | ssl.keystore.password = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | 
logger=migrator t=2024-03-10T23:14:16.215073197Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.011229ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:50 policy-pap | ssl.keystore.type = JKS 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.218580661Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:50 policy-pap | ssl.protocol = TLSv1.3 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.219598949Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.017638ms 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:50 policy-pap | ssl.provider = null 23:16:50 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.225968866Z level=info msg="Executing migration" id="create seed assignment table" 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:50 policy-pap | ssl.secure.random.implementation = null 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.227886781Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.922275ms 23:16:50 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:50 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.232999434Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:50 policy-pap | ssl.truststore.certificates = null 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.234261928Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.262114ms 23:16:50 policy-pap | 
ssl.truststore.location = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.238635337Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:50 policy-pap | ssl.truststore.password = null 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.249937463Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.303426ms 23:16:50 policy-pap | ssl.truststore.type = JKS 23:16:50 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.253730023Z level=info msg="Executing migration" id="permission kind migration" 23:16:50 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.261528975Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.798302ms 23:16:50 policy-pap | 23:16:50 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.266508526Z level=info msg="Executing migration" id="permission attribute migration" 23:16:50 policy-pap | [2024-03-10T23:14:52.813+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.274250467Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.740691ms 23:16:50 policy-pap | [2024-03-10T23:14:52.813+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:50 grafana | 
logger=migrator t=2024-03-10T23:14:16.278166738Z level=info msg="Executing migration" id="permission identifier migration" 23:16:50 policy-pap | [2024-03-10T23:14:52.813+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112492813 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.28373591Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.568132ms 23:16:50 policy-pap | [2024-03-10T23:14:52.813+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:50 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:50 kafka | [2024-03-10 23:14:53,792] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.288551368Z level=info msg="Executing migration" id="add permission identifier index" 23:16:50 policy-pap | [2024-03-10T23:14:52.813+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,792] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.289816191Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.264123ms 23:16:50 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | [2024-03-10T23:14:52.813+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d93a1df2-c75b-4d19-a8be-d65cd6bbf958, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, 
locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:50 kafka | [2024-03-10 23:14:53,800] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.294869813Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.813+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=688a4207-1e5e-4290-924d-17e6019295ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:50 kafka | [2024-03-10 23:14:53,801] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.297073884Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.203131ms 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.814+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c74d1b9d-8c1e-4fdc-8ad4-969e7dc024a9, alive=false, publisher=null]]: starting 23:16:50 kafka | [2024-03-10 23:14:53,802] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.301562966Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.834+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:50 kafka | [2024-03-10 23:14:53,802] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.303001492Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.439036ms 23:16:50 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:50 policy-pap | acks = -1 23:16:50 kafka | [2024-03-10 23:14:53,802] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
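For reference, a minimal sketch of a consumer configured with the values the policy-pap log reports above (servers=[kafka:9092], consumerGroup=policy-pap, effectiveTopic=policy-pdp-pap, session.timeout.ms=45000, security.protocol=PLAINTEXT). policy-pap itself uses the Java Kafka client; the kafka-python package here is only an assumption used to make the logged settings concrete.

# Illustrative only: not the policy-pap code path, just the logged consumer settings.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "policy-pdp-pap",                      # effectiveTopic from the log
    bootstrap_servers=["kafka:9092"],      # servers=[kafka:9092]
    group_id="policy-pap",                 # consumerGroup=policy-pap
    security_protocol="PLAINTEXT",         # security.protocol = PLAINTEXT
    session_timeout_ms=45000,              # session.timeout.ms = 45000
    value_deserializer=lambda v: v.decode("utf-8"),
)

for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)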
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.308015574Z level=info msg="Executing migration" id="create query_history table v1" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | auto.include.jmx.reporter = true 23:16:50 kafka | [2024-03-10 23:14:53,834] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.309054492Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.037719ms 23:16:50 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | batch.size = 16384 23:16:50 kafka | [2024-03-10 23:14:53,835] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.315044292Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | bootstrap.servers = [kafka:9092] 23:16:50 kafka | [2024-03-10 23:14:53,835] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.316279055Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.234123ms 23:16:50 policy-db-migrator | 23:16:50 policy-pap | buffer.memory = 33554432 23:16:50 kafka | [2024-03-10 23:14:53,835] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.321414239Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:50 kafka | [2024-03-10 23:14:53,836] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.321715564Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=302.505µs 23:16:50 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:50 policy-pap | client.id = producer-1 23:16:50 kafka | [2024-03-10 23:14:53,843] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.325824548Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | compression.type = none 23:16:50 kafka | [2024-03-10 23:14:53,844] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.326006772Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=182.844µs 23:16:50 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | connections.max.idle.ms = 540000 23:16:50 kafka | [2024-03-10 23:14:53,844] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.331183816Z level=info msg="Executing migration" id="teams permissions migration" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | delivery.timeout.ms = 120000 23:16:50 kafka | [2024-03-10 23:14:53,844] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.331870049Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=669.152µs 23:16:50 policy-db-migrator | 23:16:50 policy-pap | enable.idempotence = true 23:16:50 kafka | [2024-03-10 23:14:53,844] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.335880162Z level=info msg="Executing migration" id="dashboard permissions" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | interceptor.classes = [] 23:16:50 kafka | [2024-03-10 23:14:53,855] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.33851471Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=2.633508ms 23:16:50 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:50 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:50 kafka | [2024-03-10 23:14:53,855] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.345289774Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | linger.ms = 0 23:16:50 kafka | [2024-03-10 23:14:53,855] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.346568468Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.280794ms 23:16:50 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | max.block.ms = 60000 23:16:50 kafka | [2024-03-10 23:14:53,855] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.352331313Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | max.in.flight.requests.per.connection = 5 23:16:50 kafka | [2024-03-10 23:14:53,855] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
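The broker entries above create each __consumer_offsets partition log with cleanup.policy=compact, compression.type=producer and segment.bytes=104857600. __consumer_offsets is created internally by Kafka itself; as a hedged sketch, the same properties could be expressed for an ordinary topic with kafka-python's admin client (the topic name "example-offsets" and replication factor are illustrative assumptions).

# Sketch only: a compacted, 50-partition topic with the logged segment size.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers=["kafka:9092"])
admin.create_topics([
    NewTopic(
        name="example-offsets",
        num_partitions=50,
        replication_factor=1,
        topic_configs={
            "cleanup.policy": "compact",      # as logged by LogManager
            "segment.bytes": "104857600",
        },
    )
])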
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.352668079Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=331.146µs 23:16:50 policy-db-migrator | 23:16:50 policy-pap | max.request.size = 1048576 23:16:50 kafka | [2024-03-10 23:14:53,866] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.356798814Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | metadata.max.age.ms = 300000 23:16:50 kafka | [2024-03-10 23:14:53,866] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.357364485Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=565.171µs 23:16:50 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:50 policy-pap | metadata.max.idle.ms = 300000 23:16:50 kafka | [2024-03-10 23:14:53,866] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.361537561Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | metric.reporters = [] 23:16:50 kafka | [2024-03-10 23:14:53,866] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.362663681Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.125431ms 23:16:50 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | metrics.num.samples = 2 23:16:50 kafka | [2024-03-10 23:14:53,867] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.367410228Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | metrics.recording.level = INFO 23:16:50 kafka | [2024-03-10 23:14:53,879] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.368647891Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.236574ms 23:16:50 policy-db-migrator | 23:16:50 policy-pap | metrics.sample.window.ms = 30000 23:16:50 kafka | [2024-03-10 23:14:53,880] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.373126762Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:50 kafka | [2024-03-10 23:14:53,880] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.381308962Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.181389ms 23:16:50 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:50 policy-pap | partitioner.availability.timeout.ms = 0 23:16:50 kafka | [2024-03-10 23:14:53,880] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.384986118Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | partitioner.class = null 23:16:50 kafka | [2024-03-10 23:14:53,880] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.38505968Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=73.732µs 23:16:50 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | partitioner.ignore.keys = false 23:16:50 kafka | [2024-03-10 23:14:53,887] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.392261901Z level=info msg="Executing migration" id="create correlation table v1" 23:16:50 policy-pap | receive.buffer.bytes = 32768 23:16:50 kafka | [2024-03-10 23:14:53,888] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.393923472Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.661661ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | reconnect.backoff.max.ms = 1000 23:16:50 kafka | [2024-03-10 23:14:53,888] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.39823508Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | reconnect.backoff.ms = 50 23:16:50 kafka | [2024-03-10 23:14:53,888] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.39927281Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.0375ms 23:16:50 policy-db-migrator | 23:16:50 policy-pap | request.timeout.ms = 30000 23:16:50 kafka | [2024-03-10 23:14:53,888] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.402795924Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:50 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:50 policy-pap | retries = 2147483647 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.403850333Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.054159ms 23:16:50 kafka | [2024-03-10 23:14:53,894] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | retry.backoff.ms = 100 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.409511646Z level=info msg="Executing migration" id="add correlation config column" 23:16:50 kafka | [2024-03-10 23:14:53,894] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | sasl.client.callback.handler.class = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.420937515Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.414588ms 23:16:50 kafka | [2024-03-10 23:14:53,894] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.jaas.config = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.46227553Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:50 kafka | [2024-03-10 23:14:53,894] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 kafka | [2024-03-10 23:14:53,894] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
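The ProducerConfig dump that starts above (acks = -1, batch.size = 16384, buffer.memory = 33554432, linger.ms = 0, max.request.size = 1048576, retries = 2147483647, bootstrap.servers = [kafka:9092]) describes the Java producer inside policy-pap. A rough kafka-python equivalent of those settings, assumed here purely for illustration, looks like this:

# Sketch only: the logged producer tuning, approximated with kafka-python.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    acks="all",                        # the Java config's acks = -1
    batch_size=16384,
    linger_ms=0,
    buffer_memory=33554432,
    max_request_size=1048576,
    retries=2147483647,
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: v.encode("utf-8"),
)
producer.send("policy-pdp-pap", key="pdp-update", value='{"messageName": "PDP_UPDATE"}')
producer.flush()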
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.464800596Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.527296ms 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 kafka | [2024-03-10 23:14:53,902] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.46887477Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:50 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:50 policy-pap | sasl.kerberos.service.name = null 23:16:50 kafka | [2024-03-10 23:14:53,902] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.470625822Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.756242ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 kafka | [2024-03-10 23:14:53,902] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.477364605Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:50 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 kafka | [2024-03-10 23:14:53,902] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.501473015Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.108361ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.login.callback.handler.class = null 23:16:50 kafka | [2024-03-10 23:14:53,902] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.506453256Z level=info msg="Executing migration" id="create correlation v2" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.login.class = null 23:16:50 kafka | [2024-03-10 23:14:53,910] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.507911022Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.443016ms 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.login.connect.timeout.ms = null 23:16:50 kafka | [2024-03-10 23:14:53,911] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 kafka | [2024-03-10 23:14:53,911] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.514538154Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:50 policy-pap | sasl.login.read.timeout.ms = null 23:16:50 kafka | [2024-03-10 23:14:53,911] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.515601673Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.059529ms 23:16:50 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:50 kafka | [2024-03-10 23:14:53,911] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.521101674Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:50 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:50 kafka | [2024-03-10 23:14:53,917] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.522240644Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.13858ms 23:16:50 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:50 kafka | [2024-03-10 23:14:53,918] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.526629984Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:50 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:50 kafka | [2024-03-10 23:14:53,918] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.527698404Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.067479ms 23:16:50 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:50 kafka | [2024-03-10 23:14:53,918] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.535380404Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:50 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:50 kafka | [2024-03-10 23:14:53,918] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
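The db-migrator lines above apply numbered upgrade scripts (0900 through 1060) that add the TOSCA foreign keys and indexes one statement at a time. A hedged sketch of that numbered-script pattern is shown below; the connection details ("mariadb", "policy_user", "policy_pass", "policyadmin"), the upgrade/ directory layout, and the naive statement splitting are assumptions for illustration, not values taken from this job.

# Sketch only: apply NNNN-*.sql upgrade scripts in order against the policy database.
import glob
import mysql.connector

conn = mysql.connector.connect(
    host="mariadb", user="policy_user", password="policy_pass", database="policyadmin"
)
cur = conn.cursor()
for script in sorted(glob.glob("upgrade/*.sql")):   # e.g. 0900-..., 1050-..., 1060-...
    with open(script) as f:
        for statement in f.read().split(";"):        # naive split; fine for these DDL files
            if statement.strip():
                cur.execute(statement)
    conn.commit()
    print(f"applied {script}")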
(state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.536118467Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=745.643µs 23:16:50 policy-pap | sasl.mechanism = GSSAPI 23:16:50 kafka | [2024-03-10 23:14:53,926] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.540179652Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:50 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 kafka | [2024-03-10 23:14:53,927] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.542347011Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=2.168919ms 23:16:50 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:50 kafka | [2024-03-10 23:14:53,927] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.548927391Z level=info msg="Executing migration" id="add provisioning column" 23:16:50 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:50 kafka | [2024-03-10 23:14:53,927] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.558206691Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.279689ms 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 kafka | [2024-03-10 23:14:53,927] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.563312244Z level=info msg="Executing migration" id="create entity_events table" 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 kafka | [2024-03-10 23:14:53,933] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.564298461Z level=info msg="Migration successfully executed" id="create entity_events table" duration=986.057µs 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 kafka | [2024-03-10 23:14:53,934] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.570844061Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:50 kafka | [2024-03-10 23:14:53,934] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 kafka | [2024-03-10 23:14:53,934] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:50 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:50 kafka | [2024-03-10 23:14:53,934] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.576017786Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=5.165095ms 23:16:50 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.581455976Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:50 kafka | [2024-03-10 23:14:53,941] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.581978105Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:50 kafka | [2024-03-10 23:14:53,942] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | security.protocol = PLAINTEXT 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.586915944Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:50 kafka | [2024-03-10 23:14:53,942] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:50 policy-pap | security.providers = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.587409534Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:50 kafka | [2024-03-10 23:14:53,942] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.591359846Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:50 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:50 kafka | [2024-03-10 23:14:53,942] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
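The grafana migrator entries throughout this section follow one pattern: each migration is keyed by id, executed once, timed, recorded in a migration log, and skipped on rerun (hence the "Skipping migration: Already executed, but not recorded in migration log" warnings above). A minimal sketch of that pattern, with table and column names that are illustrative rather than Grafana's actual schema:

# Sketch only: id-keyed migrations recorded in a migration log so reruns are skipped.
import sqlite3
import time

MIGRATIONS = [
    ("create builtin role table",
     "CREATE TABLE IF NOT EXISTS builtin_role (id INTEGER PRIMARY KEY, role_id INTEGER, org_id INTEGER)"),
    ("add index builtin_role.role_id",
     "CREATE INDEX IF NOT EXISTS idx_builtin_role_role_id ON builtin_role(role_id)"),
]

db = sqlite3.connect("example.db")
db.execute("CREATE TABLE IF NOT EXISTS migration_log (id TEXT PRIMARY KEY, duration_ms REAL)")
for mig_id, sql in MIGRATIONS:
    if db.execute("SELECT 1 FROM migration_log WHERE id = ?", (mig_id,)).fetchone():
        print(f'skipping migration id="{mig_id}": already executed')
        continue
    start = time.perf_counter()
    db.execute(sql)
    duration_ms = (time.perf_counter() - start) * 1000
    db.execute("INSERT INTO migration_log (id, duration_ms) VALUES (?, ?)", (mig_id, duration_ms))
    db.commit()
    print(f'migration successfully executed id="{mig_id}" duration={duration_ms:.3f}ms')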
(state.change.logger) 23:16:50 policy-pap | send.buffer.bytes = 131072 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.592355944Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=995.058µs 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,950] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.597525448Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,951] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.5992673Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.738612ms 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,951] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.cipher.suites = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.604367283Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:50 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:50 kafka | [2024-03-10 23:14:53,951] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.606182446Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.816903ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,951] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.611930381Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:50 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:50 kafka | [2024-03-10 23:14:53,957] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | ssl.engine.factory.class = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.613339007Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.408146ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,958] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | ssl.key.password = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.617987542Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,958] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.619236335Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.249163ms 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,958] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.keystore.certificate.chain = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.627464935Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:50 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:50 kafka | [2024-03-10 23:14:53,958] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-pap | ssl.keystore.key = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.628628156Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.163821ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,968] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | ssl.keystore.location = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.632222862Z level=info msg="Executing migration" id="Drop public config table" 23:16:50 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:50 kafka | [2024-03-10 23:14:53,968] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | ssl.keystore.password = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.633122708Z level=info msg="Migration successfully executed" id="Drop public config table" duration=899.956µs 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,968] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.keystore.type = JKS 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.637740553Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,968] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.protocol = TLSv1.3 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.639020706Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.279753ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,968] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-pap | ssl.provider = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.654232924Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:50 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:50 kafka | [2024-03-10 23:14:53,976] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | ssl.secure.random.implementation = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.656670788Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.351682ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,976] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.662170478Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,976] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.truststore.certificates = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.664749795Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.576947ms 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:53,976] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | ssl.truststore.location = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.668653127Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:50 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:50 kafka | [2024-03-10 23:14:53,977] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-pap | ssl.truststore.password = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.669832878Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.179991ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:53,983] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | ssl.truststore.type = JKS 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.673563746Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:50 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:50 kafka | [2024-03-10 23:14:53,983] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | transaction.timeout.ms = 60000 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.693994159Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.431143ms 23:16:50 kafka | [2024-03-10 23:14:53,983] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:50 policy-pap | transactional.id = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.699919098Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:50 kafka | [2024-03-10 23:14:53,983] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.711856665Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.935747ms 23:16:50 kafka | [2024-03-10 23:14:53,983] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-pap | 23:16:50 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.715339869Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:50 kafka | [2024-03-10 23:14:53,992] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | [2024-03-10T23:14:52.847+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
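Taken together, the 0120-0150 pdpstatistics steps logged above follow a common re-keying pattern: drop the old primary key, add a surrogate ID column, backfill it by row position, then re-key on (ID, name, version). A condensed sketch of that sequence, assembled only from statements already shown in the log and assuming a MariaDB version with window-function support (the 0130 statement also adds the POLICYUNDEPLOY* count columns, omitted here for brevity):

-- 0120: drop the old key
ALTER TABLE pdpstatistics DROP PRIMARY KEY;
-- 0130: add the surrogate ID column
ALTER TABLE pdpstatistics ADD COLUMN ID BIGINT NOT NULL;
-- 0140: backfill ID by row position ordered by timeStamp, then re-key
UPDATE pdpstatistics AS p
JOIN (SELECT name, version, timeStamp,
             ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num
      FROM pdpstatistics
      GROUP BY name, version, timeStamp) AS t
  ON (p.name = t.name AND p.version = t.version AND p.timeStamp = t.timeStamp)
SET p.id = t.row_num;
ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version);
-- 0150: timeStamp becomes nullable with microsecond precision
ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL;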
23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.725134908Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.793839ms 23:16:50 kafka | [2024-03-10 23:14:53,992] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | [2024-03-10T23:14:52.865+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.732200447Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:50 kafka | [2024-03-10 23:14:53,992] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:50 policy-pap | [2024-03-10T23:14:52.865+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.732612915Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=405.108µs 23:16:50 kafka | [2024-03-10 23:14:53,992] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | [2024-03-10T23:14:52.865+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112492865 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.736305942Z level=info msg="Executing migration" id="add share column" 23:16:50 kafka | [2024-03-10 23:14:53,992] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:52.865+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c74d1b9d-8c1e-4fdc-8ad4-969e7dc024a9, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.74495575Z level=info msg="Migration successfully executed" id="add share column" duration=8.644968ms 23:16:50 kafka | [2024-03-10 23:14:53,998] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | [2024-03-10T23:14:52.865+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ed1022b3-b5bd-40d9-8e00-fcdc015b52e2, alive=false, publisher=null]]: starting 23:16:50 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.748723288Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:50 kafka | [2024-03-10 23:14:53,998] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | [2024-03-10T23:14:52.866+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.748928182Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=204.914µs 23:16:50 kafka | [2024-03-10 23:14:53,999] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:50 policy-pap | acks = -1 23:16:50 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.752503818Z level=info msg="Executing migration" id="create file table" 23:16:50 kafka | [2024-03-10 23:14:53,999] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | auto.include.jmx.reporter = true 23:16:50 policy-db-migrator | JOIN pdpstatistics b 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.753475635Z level=info msg="Migration successfully executed" id="create file table" duration=971.297µs 23:16:50 policy-pap | batch.size = 16384 23:16:50 kafka | [2024-03-10 23:14:53,999] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.758619869Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:50 policy-pap | bootstrap.servers = [kafka:9092] 23:16:50 kafka | [2024-03-10 23:14:54,008] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | SET a.id = b.id 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.759998924Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.380645ms 23:16:50 policy-pap | buffer.memory = 33554432 23:16:50 kafka | [2024-03-10 23:14:54,008] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.763992788Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:50 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:50 kafka | [2024-03-10 23:14:54,008] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.765678868Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.686121ms 23:16:50 policy-pap | client.id = producer-2 23:16:50 kafka | [2024-03-10 23:14:54,008] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.769349805Z level=info msg="Executing migration" id="create file_meta table" 23:16:50 kafka | [2024-03-10 23:14:54,008] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:50 policy-pap | compression.type = none 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.770127969Z level=info msg="Migration successfully executed" id="create file_meta table" duration=775.124µs 23:16:50 kafka | [2024-03-10 23:14:54,016] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | connections.max.idle.ms = 540000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.777166877Z level=info msg="Executing migration" id="file table idx: path key" 23:16:50 kafka | [2024-03-10 23:14:54,016] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:50 policy-pap | delivery.timeout.ms = 120000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.778891349Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.723702ms 23:16:50 kafka | [2024-03-10 23:14:54,016] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | enable.idempotence = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.784075064Z level=info msg="Executing migration" id="set path collation in file table" 23:16:50 kafka | [2024-03-10 23:14:54,016] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | interceptor.classes = [] 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.78442533Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=349.676µs 23:16:50 kafka | [2024-03-10 23:14:54,016] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
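The 0160-0180 jpapdpstatistics_enginestats steps apply the same idea to the engine-statistics table: add an ID column, copy the matching pdpstatistics id across, then drop the now-redundant timeStamp column. The multi-line UPDATE is split across several log lines above; reassembled here verbatim:

ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME;
-- Copy the id of the matching pdpstatistics row (joined on name, version, timeStamp)
UPDATE jpapdpstatistics_enginestats a
JOIN pdpstatistics b
  ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
SET a.id = b.id;
ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;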
(state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.790138874Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:50 kafka | [2024-03-10 23:14:54,023] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:50 policy-pap | linger.ms = 0 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.790207036Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=68.832µs 23:16:50 kafka | [2024-03-10 23:14:54,024] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | max.block.ms = 60000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.794725429Z level=info msg="Executing migration" id="managed permissions migration" 23:16:50 kafka | [2024-03-10 23:14:54,024] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:50 policy-pap | max.in.flight.requests.per.connection = 5 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.795128246Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=402.658µs 23:16:50 kafka | [2024-03-10 23:14:54,024] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | max.request.size = 1048576 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.801881218Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:50 kafka | [2024-03-10 23:14:54,024] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | metadata.max.age.ms = 300000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.802036491Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=155.153µs 23:16:50 kafka | [2024-03-10 23:14:54,031] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | metadata.max.idle.ms = 300000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.806086806Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:50 kafka | [2024-03-10 23:14:54,031] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:50 policy-pap | metric.reporters = [] 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.807139795Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.053368ms 23:16:50 kafka | [2024-03-10 23:14:54,031] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | metrics.num.samples = 2 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.852713906Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:50 kafka | [2024-03-10 23:14:54,031] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:50 policy-pap | metrics.recording.level = INFO 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.864475382Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=11.766846ms 23:16:50 kafka | [2024-03-10 23:14:54,031] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
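The 0190/0200 steps create the policy audit table and index its TIMESTAMP column. Reassembled from the log, with one illustrative query appended to show what the index is for; the one-day window is an assumption, not from this run:

CREATE TABLE IF NOT EXISTS jpapolicyaudit (
  ACTION INT DEFAULT NULL,
  PDPGROUP VARCHAR(255) NULL,
  PDPTYPE VARCHAR(255) NULL,
  TIMESTAMP datetime DEFAULT NULL,
  USER VARCHAR(255) NULL,
  ID BIGINT NOT NULL,
  name VARCHAR(120) NOT NULL,
  version VARCHAR(20) NOT NULL,
  PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)
);
CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP);

-- Illustrative: recent audit records, newest first
SELECT name, version, ACTION, PDPGROUP
FROM jpapolicyaudit
WHERE TIMESTAMP >= NOW() - INTERVAL 1 DAY
ORDER BY TIMESTAMP DESC;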
(state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | metrics.sample.window.ms = 30000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.867994126Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:50 kafka | [2024-03-10 23:14:54,039] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.868138668Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=144.892µs 23:16:50 kafka | [2024-03-10 23:14:54,040] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | partitioner.availability.timeout.ms = 0 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.876017802Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:50 kafka | [2024-03-10 23:14:54,040] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:50 policy-pap | partitioner.class = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.87754315Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.525008ms 23:16:50 kafka | [2024-03-10 23:14:54,040] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | partitioner.ignore.keys = false 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.883473788Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:50 kafka | [2024-03-10 23:14:54,040] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.884261093Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=795.725µs 23:16:50 kafka | [2024-03-10 23:14:54,047] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | receive.buffer.bytes = 32768 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.893488751Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:50 kafka | [2024-03-10 23:14:54,047] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | reconnect.backoff.max.ms = 1000 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.893849457Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=356.246µs 23:16:50 kafka | [2024-03-10 23:14:54,048] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:50 policy-pap | reconnect.backoff.ms = 50 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.897625226Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:50 kafka | [2024-03-10 23:14:54,048] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | request.timeout.ms = 30000 23:16:50 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.898627625Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.001429ms 23:16:50 kafka | [2024-03-10 23:14:54,048] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-pap | retries = 2147483647 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.90551855Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:50 kafka | [2024-03-10 23:14:54,054] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-pap | retry.backoff.ms = 100 23:16:50 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.916510851Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.031052ms 23:16:50 kafka | [2024-03-10 23:14:54,054] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-pap | sasl.client.callback.handler.class = null 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.92082625Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:50 kafka | [2024-03-10 23:14:54,054] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:50 policy-pap | sasl.jaas.config = null 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.92855224Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.72889ms 23:16:50 kafka | [2024-03-10 23:14:54,054] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.934315766Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:50 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:50 kafka | [2024-03-10 23:14:54,054] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
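The 0210/0220 steps create a JPA-style sequence table and seed the SEQ_GEN row from the highest existing pdpstatistics id, so generated ids continue above any value already present. Condensed from the statements in the log:

CREATE TABLE IF NOT EXISTS sequence (
  SEQ_NAME VARCHAR(50) NOT NULL,
  SEQ_COUNT DECIMAL(38) DEFAULT NULL,
  PRIMARY KEY PK_SEQUENCE (SEQ_NAME)
);
-- Seed the generator so new ids start above the largest id already in pdpstatistics
INSERT INTO sequence(SEQ_NAME, SEQ_COUNT)
VALUES ('SEQ_GEN', (SELECT IFNULL(max(id), 0) FROM pdpstatistics));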
(state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.935454007Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.138431ms 23:16:50 policy-pap | sasl.kerberos.service.name = null 23:16:50 kafka | [2024-03-10 23:14:54,060] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:16.938791498Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:50 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:50 kafka | [2024-03-10 23:14:54,060] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.024931218Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=86.13207ms 23:16:50 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:50 kafka | [2024-03-10 23:14:54,060] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.029023544Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:50 policy-pap | sasl.login.callback.handler.class = null 23:16:50 kafka | [2024-03-10 23:14:54,061] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.login.class = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.030063974Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.03991ms 23:16:50 kafka | [2024-03-10 23:14:54,061] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.login.connect.timeout.ms = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.036198218Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:50 kafka | [2024-03-10 23:14:54,075] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:50 policy-pap | sasl.login.read.timeout.ms = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.037814117Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.615589ms 23:16:50 kafka | [2024-03-10 23:14:54,076] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.04224562Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:50 kafka | [2024-03-10 23:14:54,076] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:50 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.07018224Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=27.935599ms 23:16:50 kafka | [2024-03-10 23:14:54,077] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.076372424Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:16:50 kafka | [2024-03-10 23:14:54,077] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.082888906Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.515272ms 23:16:50 kafka | [2024-03-10 23:14:54,085] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.086515193Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:16:50 kafka | [2024-03-10 23:14:54,085] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:50 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.086826059Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=311.006µs 23:16:50 kafka | [2024-03-10 23:14:54,085] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.mechanism = GSSAPI 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.089992278Z level=info msg="Executing migration" id="prevent seeding OnCall access" 23:16:50 kafka | [2024-03-10 23:14:54,086] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:50 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.090180511Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=186.293µs 23:16:50 kafka | [2024-03-10 23:14:54,086] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.093410121Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:50 kafka | [2024-03-10 23:14:54,095] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.093620335Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=209.874µs 23:16:50 kafka | [2024-03-10 23:14:54,095] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.098482316Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:16:50 kafka | [2024-03-10 23:14:54,095] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.098689149Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=206.433µs 23:16:50 kafka | [2024-03-10 23:14:54,095] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.102042022Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:16:50 kafka | [2024-03-10 23:14:54,096] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:50 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.102263846Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=218.384µs 23:16:50 kafka | [2024-03-10 23:14:54,104] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.106260031Z level=info msg="Executing migration" id="create folder table" 23:16:50 kafka | [2024-03-10 23:14:54,104] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.107175047Z level=info msg="Migration successfully executed" id="create folder table" duration=914.646µs 23:16:50 kafka | [2024-03-10 23:14:54,105] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.111690282Z level=info msg="Executing migration" id="Add index for parent_uid" 23:16:50 kafka | [2024-03-10 23:14:54,105] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:50 policy-pap | security.protocol = PLAINTEXT 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.112914684Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.223852ms 23:16:50 kafka | [2024-03-10 23:14:54,105] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | security.providers = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.116559922Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:50 kafka | [2024-03-10 23:14:54,113] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:50 policy-pap | send.buffer.bytes = 131072 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.117742694Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.182872ms 23:16:50 kafka | [2024-03-10 23:14:54,114] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.121780299Z level=info msg="Executing migration" id="Update folder title length" 23:16:50 kafka | [2024-03-10 23:14:54,114] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.121820729Z level=info msg="Migration successfully executed" id="Update folder title length" duration=41.82µs 23:16:50 kafka | [2024-03-10 23:14:54,114] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | ssl.cipher.suites = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.127169019Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:16:50 kafka | [2024-03-10 23:14:54,114] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:50 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.128970493Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.800864ms 23:16:50 kafka | [2024-03-10 23:14:54,123] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.1331745Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:50 kafka | [2024-03-10 23:14:54,125] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:50 policy-pap | ssl.engine.factory.class = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.134949394Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.775114ms 23:16:50 kafka | [2024-03-10 23:14:54,125] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.key.password = null 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.138782725Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:50 kafka | [2024-03-10 23:14:54,125] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.140902685Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.12218ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.keystore.certificate.chain = null 23:16:50 kafka | [2024-03-10 23:14:54,125] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.146528239Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:50 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:50 policy-pap | ssl.keystore.key = null 23:16:50 kafka | [2024-03-10 23:14:54,134] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.147007908Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=479.619µs 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.keystore.location = null 23:16:50 kafka | [2024-03-10 23:14:54,135] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.150714038Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | ssl.keystore.password = null 23:16:50 kafka | [2024-03-10 23:14:54,135] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.150979673Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=266.015µs 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | ssl.keystore.type = JKS 23:16:50 kafka | [2024-03-10 23:14:54,135] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.155207481Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:16:50 policy-pap | ssl.protocol = TLSv1.3 23:16:50 kafka | [2024-03-10 23:14:54,135] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.156271031Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.06257ms 23:16:50 policy-pap | ssl.provider = null 23:16:50 kafka | [2024-03-10 23:14:54,144] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.160567941Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:16:50 policy-pap | ssl.secure.random.implementation = null 23:16:50 kafka | [2024-03-10 23:14:54,145] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.162459316Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.890305ms 23:16:50 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:50 kafka | [2024-03-10 23:14:54,145] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.166660834Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:16:50 policy-pap | ssl.truststore.certificates = null 23:16:50 kafka | [2024-03-10 23:14:54,145] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.168284604Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.6238ms 23:16:50 policy-pap | ssl.truststore.location = null 23:16:50 kafka | [2024-03-10 23:14:54,145] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.172092445Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:16:50 policy-pap | ssl.truststore.password = null 23:16:50 kafka | [2024-03-10 23:14:54,185] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.173345889Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.252684ms 23:16:50 policy-pap | ssl.truststore.type = JKS 23:16:50 kafka | [2024-03-10 23:14:54,186] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.177811142Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:16:50 policy-pap | transaction.timeout.ms = 60000 23:16:50 kafka | [2024-03-10 23:14:54,187] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.17932355Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.511249ms 23:16:50 policy-pap | transactional.id = null 23:16:50 kafka | [2024-03-10 23:14:54,187] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.18307386Z level=info msg="Executing migration" id="create anon_device table" 23:16:50 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:50 kafka | [2024-03-10 23:14:54,187] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.184473816Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.399417ms 23:16:50 policy-pap | 23:16:50 kafka | [2024-03-10 23:14:54,197] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.22926616Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:50 policy-pap | [2024-03-10T23:14:52.866+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
23:16:50 kafka | [2024-03-10 23:14:54,198] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.231171755Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.907555ms 23:16:50 policy-pap | [2024-03-10T23:14:52.869+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:50 kafka | [2024-03-10 23:14:54,198] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.237199838Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:50 policy-pap | [2024-03-10T23:14:52.869+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:50 kafka | [2024-03-10 23:14:54,198] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:50 policy-pap | [2024-03-10T23:14:52.869+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710112492869 23:16:50 kafka | [2024-03-10 23:14:54,199] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.239069062Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.868904ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.869+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ed1022b3-b5bd-40d9-8e00-fcdc015b52e2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:50 kafka | [2024-03-10 23:14:54,208] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.243701018Z level=info msg="Executing migration" id="create signing_key table" 23:16:50 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:50 policy-pap | [2024-03-10T23:14:52.869+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:50 kafka | [2024-03-10 23:14:54,210] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.244807638Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.10573ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.870+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:50 kafka | [2024-03-10 23:14:54,210] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 
(kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.250222729Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.872+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:50 kafka | [2024-03-10 23:14:54,210] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.251352391Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.128972ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.872+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:50 kafka | [2024-03-10 23:14:54,210] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.256091558Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:50 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:50 policy-pap | [2024-03-10T23:14:52.875+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:50 kafka | [2024-03-10 23:14:54,219] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.25778485Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.702052ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.875+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:50 kafka | [2024-03-10 23:14:54,219] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.264656728Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.876+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:50 kafka | [2024-03-10 23:14:54,219] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.265130748Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=474.89µs 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:52.877+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:50 kafka | [2024-03-10 23:14:54,219] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.269276284Z 
level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:50 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:50 policy-pap | [2024-03-10T23:14:52.877+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:50 kafka | [2024-03-10 23:14:54,220] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.277834923Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=8.558739ms 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.878+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:50 kafka | [2024-03-10 23:14:54,229] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.281043633Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:50 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:50 policy-pap | [2024-03-10T23:14:52.878+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:50 kafka | [2024-03-10 23:14:54,229] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.281740406Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=700.773µs 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:52.879+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.024 seconds (process running for 11.697) 23:16:50 kafka | [2024-03-10 23:14:54,229] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.285347633Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:53.410+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: dVKmUcACQYWhG0JC5XUpMQ 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.286621987Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.274053ms 23:16:50 kafka | [2024-03-10 23:14:54,230] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:53.412+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: dVKmUcACQYWhG0JC5XUpMQ 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.292643859Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:16:50 kafka | [2024-03-10 23:14:54,230] INFO [Broker id=1] Leader 
__consumer_offsets-27 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:50 policy-pap | [2024-03-10T23:14:53.413+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.293847772Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.199683ms 23:16:50 kafka | [2024-03-10 23:14:54,237] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:53.413+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: dVKmUcACQYWhG0JC5XUpMQ 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.297769404Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 23:16:50 kafka | [2024-03-10 23:14:54,238] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:50 policy-pap | [2024-03-10T23:14:53.512+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.298920866Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.150682ms 23:16:50 kafka | [2024-03-10 23:14:54,238] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:53.512+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Cluster ID: dVKmUcACQYWhG0JC5XUpMQ 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.304920878Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:16:50 kafka | [2024-03-10 23:14:54,238] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:53.531+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.306431925Z level=info msg="Migration successfully executed" id="Add unique index for 
dashboard_org_id_folder_uid_title_is_folder" duration=1.510337ms 23:16:50 kafka | [2024-03-10 23:14:54,238] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | msg 23:16:50 policy-pap | [2024-03-10T23:14:53.549+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.309685026Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:16:50 kafka | [2024-03-10 23:14:54,248] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | upgrade to 1100 completed 23:16:50 policy-pap | [2024-03-10T23:14:53.550+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.311920817Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.235671ms 23:16:50 kafka | [2024-03-10 23:14:54,249] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 policy-pap | [2024-03-10T23:14:53.637+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.315166248Z level=info msg="Executing migration" id="create sso_setting table" 23:16:50 kafka | [2024-03-10 23:14:54,249] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:50 policy-pap | [2024-03-10T23:14:53.650+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.316957631Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.790433ms 23:16:50 kafka | [2024-03-10 23:14:54,249] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 policy-pap | [2024-03-10T23:14:53.752+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.322951293Z level=info msg="Executing migration" id="copy kvstore migration status to each 
org" 23:16:50 kafka | [2024-03-10 23:14:54,250] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:50 policy-pap | [2024-03-10T23:14:53.758+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.32379908Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=847.736µs 23:16:50 kafka | [2024-03-10 23:14:54,258] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.327091701Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:50 policy-pap | [2024-03-10T23:14:54.379+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:50 kafka | [2024-03-10 23:14:54,259] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.327664541Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=573.34µs 23:16:50 policy-pap | [2024-03-10T23:14:54.382+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:50 kafka | [2024-03-10 23:14:54,259] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.331087244Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:16:50 policy-pap | [2024-03-10T23:14:54.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:50 kafka | [2024-03-10 23:14:54,259] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.331336449Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=249.195µs 23:16:50 policy-pap | [2024-03-10T23:14:54.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] (Re-)joining group 23:16:50 kafka | [2024-03-10 23:14:54,259] 
INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.334720482Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:16:50 policy-pap | [2024-03-10T23:14:54.418+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Request joining group due to: need to re-join with the given member-id: consumer-688a4207-1e5e-4290-924d-17e6019295ad-3-7b8ba1bf-5716-4996-a24c-cc177b61a35a 23:16:50 kafka | [2024-03-10 23:14:54,267] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.343696539Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=8.975307ms 23:16:50 policy-pap | [2024-03-10T23:14:54.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:50 kafka | [2024-03-10 23:14:54,267] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.349499037Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:16:50 policy-pap | [2024-03-10T23:14:54.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] (Re-)joining group 23:16:50 kafka | [2024-03-10 23:14:54,267] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.356741412Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.241205ms 23:16:50 policy-pap | [2024-03-10T23:14:54.428+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-cd681474-2f01-4073-b953-a2ec7226f594 23:16:50 kafka | [2024-03-10 23:14:54,267] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.360312549Z level=info msg="Executing migration" 
id="removing scope from alert.instances:read action migration" 23:16:50 policy-pap | [2024-03-10T23:14:54.428+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:50 kafka | [2024-03-10 23:14:54,267] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.360731177Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=418.258µs 23:16:50 policy-pap | [2024-03-10T23:14:54.428+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:50 kafka | [2024-03-10 23:14:54,276] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=migrator t=2024-03-10T23:14:17.364202771Z level=info msg="migrations completed" performed=547 skipped=0 duration=4.852928327s 23:16:50 policy-pap | [2024-03-10T23:14:57.458+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cd681474-2f01-4073-b953-a2ec7226f594', protocol='range'} 23:16:50 kafka | [2024-03-10 23:14:54,276] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=sqlstore t=2024-03-10T23:14:17.373548564Z level=info msg="Created default admin" user=admin 23:16:50 policy-pap | [2024-03-10T23:14:57.468+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cd681474-2f01-4073-b953-a2ec7226f594=Assignment(partitions=[policy-pdp-pap-0])} 23:16:50 kafka | [2024-03-10 23:14:54,276] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=sqlstore t=2024-03-10T23:14:17.373939902Z level=info msg="Created default organization" 23:16:50 policy-pap | [2024-03-10T23:14:57.469+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Successfully joined group with generation Generation{generationId=1, memberId='consumer-688a4207-1e5e-4290-924d-17e6019295ad-3-7b8ba1bf-5716-4996-a24c-cc177b61a35a', protocol='range'} 23:16:50 kafka | [2024-03-10 23:14:54,276] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition 
__consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:50 grafana | logger=secrets t=2024-03-10T23:14:17.379789572Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:50 policy-pap | [2024-03-10T23:14:57.470+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Finished assignment for group at generation 1: {consumer-688a4207-1e5e-4290-924d-17e6019295ad-3-7b8ba1bf-5716-4996-a24c-cc177b61a35a=Assignment(partitions=[policy-pdp-pap-0])} 23:16:50 kafka | [2024-03-10 23:14:54,277] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=plugin.store t=2024-03-10T23:14:17.4184393Z level=info msg="Loading plugins..." 23:16:50 policy-pap | [2024-03-10T23:14:57.505+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cd681474-2f01-4073-b953-a2ec7226f594', protocol='range'} 23:16:50 kafka | [2024-03-10 23:14:54,283] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:50 grafana | logger=local.finder t=2024-03-10T23:14:17.469052292Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:50 policy-pap | [2024-03-10T23:14:57.506+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:50 kafka | [2024-03-10 23:14:54,283] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=plugin.store t=2024-03-10T23:14:17.469143944Z level=info msg="Plugins loaded" count=55 duration=50.705763ms 23:16:50 policy-pap | [2024-03-10T23:14:57.511+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Successfully synced group in generation Generation{generationId=1, memberId='consumer-688a4207-1e5e-4290-924d-17e6019295ad-3-7b8ba1bf-5716-4996-a24c-cc177b61a35a', protocol='range'} 23:16:50 kafka | [2024-03-10 23:14:54,283] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=query_data t=2024-03-10T23:14:17.472060758Z level=info msg="Query Service initialization" 23:16:50 policy-pap | 
[2024-03-10T23:14:57.512+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:50 kafka | [2024-03-10 23:14:54,283] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=live.push_http t=2024-03-10T23:14:17.475833049Z level=info msg="Live Push Gateway initialization" 23:16:50 policy-pap | [2024-03-10T23:14:57.513+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Adding newly assigned partitions: policy-pdp-pap-0 23:16:50 kafka | [2024-03-10 23:14:54,283] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:50 grafana | logger=ngalert.migration t=2024-03-10T23:14:17.482681886Z level=info msg=Starting 23:16:50 policy-pap | [2024-03-10T23:14:57.516+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:50 kafka | [2024-03-10 23:14:54,289] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=ngalert.migration t=2024-03-10T23:14:17.483481991Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 23:16:50 policy-pap | [2024-03-10T23:14:57.538+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Found no committed offset for partition policy-pdp-pap-0 23:16:50 kafka | [2024-03-10 23:14:54,290] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=ngalert.migration orgID=1 t=2024-03-10T23:14:17.483904569Z level=info msg="Migrating alerts for organisation" 23:16:50 policy-pap | [2024-03-10T23:14:57.539+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:50 kafka | [2024-03-10 23:14:54,290] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:50 policy-db-migrator | 23:16:50 grafana | logger=ngalert.migration orgID=1 t=2024-03-10T23:14:17.48451082Z level=info msg="Alerts found to migrate" alerts=0 23:16:50 policy-pap | 
[2024-03-10T23:14:57.562+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:50 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:50 grafana | logger=ngalert.migration t=2024-03-10T23:14:17.486226292Z level=info msg="Completed alerting migration" 23:16:50 kafka | [2024-03-10 23:14:54,290] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 policy-pap | [2024-03-10T23:14:57.566+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-688a4207-1e5e-4290-924d-17e6019295ad-3, groupId=688a4207-1e5e-4290-924d-17e6019295ad] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:50 policy-db-migrator | -------------- 23:16:50 grafana | logger=ngalert.state.manager t=2024-03-10T23:14:17.522115929Z level=info msg="Running in alternative execution of Error/NoData mode" 23:16:50 kafka | [2024-03-10 23:14:54,290] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:14:58.110+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:50 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:50 kafka | [2024-03-10 23:14:54,300] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:50 grafana | logger=infra.usagestats.collector t=2024-03-10T23:14:17.524209819Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:50 policy-pap | [2024-03-10T23:14:58.110+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,301] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:50 grafana | logger=provisioning.datasources t=2024-03-10T23:14:17.526460921Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:50 policy-pap | [2024-03-10T23:14:58.112+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,301] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:50 grafana | logger=provisioning.alerting t=2024-03-10T23:14:17.541567012Z level=info msg="starting to provision alerting" 23:16:50 policy-pap | 
[2024-03-10T23:15:14.855+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,301] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:50 grafana | logger=provisioning.alerting t=2024-03-10T23:14:17.541589122Z level=info msg="finished to provision alerting" 23:16:50 policy-pap | [] 23:16:50 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:50 kafka | [2024-03-10 23:14:54,301] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(N4B-uQPVQqWvv830YyfZ-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:50 grafana | logger=ngalert.state.manager t=2024-03-10T23:14:17.541825606Z level=info msg="Warming state cache for startup" 23:16:50 policy-pap | [2024-03-10T23:15:14.856+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:50 grafana | logger=ngalert.multiorg.alertmanager t=2024-03-10T23:14:17.542478989Z level=info msg="Starting MultiOrg Alertmanager" 23:16:50 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eee9c567-b75b-41ed-b746-919bec03d1e2","timestampMs":1710112514815,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:50 grafana | logger=grafanaStorageLogger t=2024-03-10T23:14:17.542584271Z level=info msg="Storage starting" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.856+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 grafana | logger=http.server t=2024-03-10T23:14:17.546394372Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:50 policy-db-migrator | TRUNCATE TABLE sequence 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:50 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"eee9c567-b75b-41ed-b746-919bec03d1e2","timestampMs":1710112514815,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 grafana | logger=ngalert.state.manager t=2024-03-10T23:14:17.575983423Z level=info 
msg="State cache has been initialized" states=0 duration=34.153377ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.867+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:50 grafana | logger=ngalert.scheduler t=2024-03-10T23:14:17.576064944Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.959+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting 23:16:50 grafana | logger=ticker t=2024-03-10T23:14:17.576168096Z level=info msg=starting first_tick=2024-03-10T23:14:20Z 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.959+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting listener 23:16:50 grafana | logger=provisioning.dashboard t=2024-03-10T23:14:17.610007636Z level=info msg="starting to provision dashboards" 23:16:50 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.960+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting timer 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.646016096Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.961+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9, expireMs=1710112544961] 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.658872255Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:50 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.962+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9, 
expireMs=1710112544961] 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.678535371Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.962+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting enqueue 23:16:50 grafana | logger=plugins.update.checker t=2024-03-10T23:14:17.682488685Z level=info msg="Update check succeeded" duration=139.559578ms 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.963+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate started 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.68979147Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:14.965+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.709434386Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 23:16:50 policy-db-migrator | DROP TABLE pdpstatistics 23:16:50 kafka | [2024-03-10 23:14:54,306] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","timestampMs":1710112514941,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 grafana | logger=grafana.update.checker t=2024-03-10T23:14:17.709847154Z level=info msg="Update check succeeded" duration=167.1878ms 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.006+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.741445443Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-31 (state.change.logger) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","timestampMs":1710112514941,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.752772783Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.007+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 grafana | logger=secret.migration t=2024-03-10T23:14:17.75530665Z level=error msg="Stopped secret migration service" service=*migrations.DataSourceSecretMigrationService reason="database is locked" 23:16:50 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","timestampMs":1710112514941,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.764130814Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.007+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.775007837Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:50 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.007+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.785959831Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 
(state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.030+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.798785139Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:50 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce266375-bc26-4441-9602-22e77eadb97f","timestampMs":1710112515017,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 grafana | logger=sqlstore.transactions t=2024-03-10T23:14:17.816721262Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.031+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 grafana | logger=plugin.signature.key_retriever t=2024-03-10T23:14:17.817468307Z level=error msg="Error downloading plugin manifest keys" error="set last updated: database is locked" 23:16:50 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:50 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ce266375-bc26-4441-9602-22e77eadb97f","timestampMs":1710112515017,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup"} 23:16:50 grafana | logger=provisioning.dashboard t=2024-03-10T23:14:17.880432488Z level=info msg="finished to provision dashboards" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.032+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:50 grafana | logger=grafana-apiserver t=2024-03-10T23:14:18.158931273Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:50 policy-db-migrator | DROP TABLE statistics_sequence 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.033+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 grafana | logger=grafana-apiserver t=2024-03-10T23:14:18.159459113Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to 
ResourceManager" 23:16:50 policy-db-migrator | -------------- 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:50 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7a0ef3bf-5a26-46e2-90b4-a5c72084a05e","timestampMs":1710112515018,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 grafana | logger=infra.usagestats t=2024-03-10T23:15:32.555135786Z level=info msg="Usage stats are ready to report" 23:16:50 policy-db-migrator | 23:16:50 kafka | [2024-03-10 23:14:54,307] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.065+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:50 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"7a0ef3bf-5a26-46e2-90b4-a5c72084a05e","timestampMs":1710112515018,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | name version 23:16:50 policy-pap | [2024-03-10T23:15:15.065+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:50 policy-db-migrator | policyadmin 1300 23:16:50 policy-pap | [2024-03-10T23:15:15.066+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:50 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:50 policy-pap | [2024-03-10T23:15:15.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping enqueue 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:50 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:20 
23:16:50 policy-pap | [2024-03-10T23:15:15.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping timer 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:50 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:20 23:16:50 policy-pap | [2024-03-10T23:15:15.067+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9, expireMs=1710112544961] 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:50 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:20 23:16:50 policy-pap | [2024-03-10T23:15:15.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping listener 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:50 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:20 23:16:50 policy-pap | [2024-03-10T23:15:15.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopped 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:50 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.077+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate successful 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:50 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.077+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d start publishing next request 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:50 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.077+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange starting 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 
(state.change.logger) 23:16:50 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.078+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange starting listener 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:50 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.078+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange starting timer 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:50 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.078+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64, expireMs=1710112545078] 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:50 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.078+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange starting enqueue 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:50 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.078+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64, expireMs=1710112545078] 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:50 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.078+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange started 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:50 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 policy-pap | [2024-03-10T23:15:15.079+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:50 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","timestampMs":1710112514942,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.097+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","timestampMs":1710112514942,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,308] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:50 policy-pap | [2024-03-10T23:15:15.098+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:50 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,310] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.114+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,312] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"cfe006df-6f88-4cb5-babd-358e5ffa270f","timestampMs":1710112515100,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.115+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64 23:16:50 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.128+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","timestampMs":1710112514942,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.128+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:50 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.132+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"cfe006df-6f88-4cb5-babd-358e5ffa270f","timestampMs":1710112515100,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange stopping 23:16:50 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange stopping enqueue 23:16:50 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:21 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange stopping timer 23:16:50 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64, expireMs=1710112545078] 23:16:50 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange stopping listener 23:16:50 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange stopped 23:16:50 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 
policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpStateChange successful 23:16:50 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,314] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d start publishing next request 23:16:50 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting 23:16:50 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting listener 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting timer 23:16:50 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=bd8174cb-de3f-4b45-85fe-9bded866eeba, expireMs=1710112545133] 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate starting enqueue 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 policy-pap | [2024-03-10T23:15:15.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate started 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 policy-pap | [2024-03-10T23:15:15.134+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"bd8174cb-de3f-4b45-85fe-9bded866eeba","timestampMs":1710112515115,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.150+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"bd8174cb-de3f-4b45-85fe-9bded866eeba","timestampMs":1710112515115,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.151+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | {"source":"pap-004a69b5-0086-4406-9a7b-3ee3aaedaa3f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"bd8174cb-de3f-4b45-85fe-9bded866eeba","timestampMs":1710112515115,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 
23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.151+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:50 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.151+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:50 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.160+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:50 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,315] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bd8174cb-de3f-4b45-85fe-9bded866eeba","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"893be548-c876-4831-b932-0692ae4fde57","timestampMs":1710112515144,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.161+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping 23:16:50 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.161+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping enqueue 23:16:50 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:22 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.161+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping timer 23:16:50 policy-db-migrator | 53 
0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-pap | [2024-03-10T23:15:15.161+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=bd8174cb-de3f-4b45-85fe-9bded866eeba, expireMs=1710112545133] 23:16:50 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-pap | [2024-03-10T23:15:15.161+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopping listener 23:16:50 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:15.161+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate stopped 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:15.167+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d PdpUpdate successful 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:15.167+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1960a06-eefc-4ad3-973d-8384f929cb9d has no more requests 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:15.167+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bd8174cb-de3f-4b45-85fe-9bded866eeba","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"893be548-c876-4831-b932-0692ae4fde57","timestampMs":1710112515144,"name":"apex-c1960a06-eefc-4ad3-973d-8384f929cb9d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:15.168+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id bd8174cb-de3f-4b45-85fe-9bded866eeba 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:18.802+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:18.812+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:19.254+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:19.833+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:19.834+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:20.441+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 68 
0770-toscarequirement.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:20.690+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:50 kafka | [2024-03-10 23:14:54,316] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:20.788+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:20.788+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:20.788+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:20.806+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-03-10T23:15:20Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-03-10T23:15:20Z, user=policyadmin)] 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:21.519+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:21.521+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:23 23:16:50 policy-pap | [2024-03-10T23:15:21.521+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.521+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.522+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup 23:16:50 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.540+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-10T23:15:21Z, user=policyadmin)] 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.908+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.909+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.909+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.909+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 
1.0.0 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.909+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.909+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:21.924+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-10T23:15:21Z, user=policyadmin)] 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:42.542+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,317] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:42.547+00:00|INFO|SessionData|http-nio-6969-exec-2] deleting DB group testGroup 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:44.962+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=ceb9152d-bcfc-4b6d-8a99-09a6db2bceb9, expireMs=1710112544961] 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 policy-pap | [2024-03-10T23:15:45.079+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=0b57fe72-7bb8-4bc8-9c3f-e3cf4f3a8e64, expireMs=1710112545078] 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:24 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1003242314200800u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO 
[GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,318] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1003242314200900u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata 
from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:25 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:50 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1003242314201000u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,319] INFO [Broker id=1] Finished LeaderAndIsr request in 555ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) 23:16:50 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1003242314201100u 1 2024-03-10 23:14:26 23:16:50 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1003242314201200u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,322] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=N4B-uQPVQqWvv830YyfZ-Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:50 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1003242314201200u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,323] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1003242314201200u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1003242314201200u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1003242314201300u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1003242314201300u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1003242314201300u 1 2024-03-10 23:14:26 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 policy-db-migrator | policyadmin: OK @ 1300 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,324] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,325] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,325] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,325] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,325] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,326] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,326] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,326] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,329] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,329] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,329] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,329] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,329] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,329] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,330] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 
1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,331] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,331] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,332] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,332] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,332] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,332] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,333] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:50 kafka | [2024-03-10 23:14:54,333] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,334] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,334] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,334] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,334] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,334] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,335] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:50 kafka | [2024-03-10 23:14:54,413] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 688a4207-1e5e-4290-924d-17e6019295ad in Empty state. Created a new member id consumer-688a4207-1e5e-4290-924d-17e6019295ad-3-7b8ba1bf-5716-4996-a24c-cc177b61a35a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:54,423] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cd681474-2f01-4073-b953-a2ec7226f594 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:54,433] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cd681474-2f01-4073-b953-a2ec7226f594 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:54,434] INFO [GroupCoordinator 1]: Preparing to rebalance group 688a4207-1e5e-4290-924d-17e6019295ad in state PreparingRebalance with old generation 0 (__consumer_offsets-22) (reason: Adding new member consumer-688a4207-1e5e-4290-924d-17e6019295ad-3-7b8ba1bf-5716-4996-a24c-cc177b61a35a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:55,208] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group cd2396fb-4c66-4451-a067-57142bc9537e in Empty state. Created a new member id consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2-a841a5cf-26c3-4eef-82c0-f364852bf17c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:55,212] INFO [GroupCoordinator 1]: Preparing to rebalance group cd2396fb-4c66-4451-a067-57142bc9537e in state PreparingRebalance with old generation 0 (__consumer_offsets-7) (reason: Adding new member consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2-a841a5cf-26c3-4eef-82c0-f364852bf17c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:57,456] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:57,464] INFO [GroupCoordinator 1]: Stabilized group 688a4207-1e5e-4290-924d-17e6019295ad generation 1 (__consumer_offsets-22) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:57,479] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cd681474-2f01-4073-b953-a2ec7226f594 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:57,479] INFO [GroupCoordinator 1]: Assignment received from leader consumer-688a4207-1e5e-4290-924d-17e6019295ad-3-7b8ba1bf-5716-4996-a24c-cc177b61a35a for group 688a4207-1e5e-4290-924d-17e6019295ad for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:58,214] INFO [GroupCoordinator 1]: Stabilized group cd2396fb-4c66-4451-a067-57142bc9537e generation 1 (__consumer_offsets-7) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:50 kafka | [2024-03-10 23:14:58,229] INFO [GroupCoordinator 1]: Assignment received from leader consumer-cd2396fb-4c66-4451-a067-57142bc9537e-2-a841a5cf-26c3-4eef-82c0-f364852bf17c for group cd2396fb-4c66-4451-a067-57142bc9537e for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:50 ++ echo 'Tearing down containers...' 23:16:50 Tearing down containers... 23:16:50 ++ docker-compose down -v --remove-orphans 23:16:50 Stopping policy-apex-pdp ... 23:16:50 Stopping policy-pap ... 
23:16:50 Stopping policy-api ... 23:16:50 Stopping grafana ... 23:16:50 Stopping kafka ... 23:16:50 Stopping compose_zookeeper_1 ... 23:16:50 Stopping mariadb ... 23:16:50 Stopping prometheus ... 23:16:50 Stopping simulator ... 23:16:51 Stopping grafana ... done 23:16:51 Stopping prometheus ... done 23:17:00 Stopping policy-apex-pdp ... done 23:17:11 Stopping simulator ... done 23:17:11 Stopping policy-pap ... done 23:17:12 Stopping mariadb ... done 23:17:12 Stopping kafka ... done 23:17:13 Stopping compose_zookeeper_1 ... done 23:17:21 Stopping policy-api ... done 23:17:21 Removing policy-apex-pdp ... 23:17:21 Removing policy-pap ... 23:17:21 Removing policy-api ... 23:17:21 Removing policy-db-migrator ... 23:17:21 Removing grafana ... 23:17:21 Removing kafka ... 23:17:21 Removing compose_zookeeper_1 ... 23:17:21 Removing mariadb ... 23:17:21 Removing prometheus ... 23:17:21 Removing simulator ... 23:17:21 Removing grafana ... done 23:17:21 Removing policy-apex-pdp ... done 23:17:21 Removing policy-pap ... done 23:17:21 Removing simulator ... done 23:17:22 Removing policy-db-migrator ... done 23:17:22 Removing kafka ... done 23:17:22 Removing compose_zookeeper_1 ... done 23:17:22 Removing prometheus ... done 23:17:22 Removing policy-api ... done 23:17:22 Removing mariadb ... done 23:17:22 Removing network compose_default 23:17:22 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:22 + load_set 23:17:22 + _setopts=hxB 23:17:22 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:22 ++ tr : ' ' 23:17:22 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:22 + set +o braceexpand 23:17:22 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:22 + set +o hashall 23:17:22 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:22 + set +o interactive-comments 23:17:22 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:22 + set +o xtrace 23:17:22 ++ echo hxB 23:17:22 ++ sed 's/./& /g' 23:17:22 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:22 + set +h 23:17:22 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:22 + set +x 23:17:22 + [[ -n /tmp/tmp.puIbOQCAlI ]] 23:17:22 + rsync -av /tmp/tmp.puIbOQCAlI/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:22 sending incremental file list 23:17:22 ./ 23:17:22 log.html 23:17:22 output.xml 23:17:22 report.html 23:17:22 testplan.txt 23:17:22 23:17:22 sent 918,685 bytes received 95 bytes 1,837,560.00 bytes/sec 23:17:22 total size is 918,144 speedup is 1.00 23:17:22 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:22 + exit 0 23:17:22 $ ssh-agent -k 23:17:22 unset SSH_AUTH_SOCK; 23:17:22 unset SSH_AGENT_PID; 23:17:22 echo Agent pid 2077 killed; 23:17:22 [ssh-agent] Stopped. 23:17:22 Robot results publisher started... 23:17:22 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:22 -Parsing output xml: 23:17:22 Done! 23:17:22 WARNING! Could not find file: **/log.html 23:17:22 WARNING! Could not find file: **/report.html 23:17:22 -Copying log files to build dir: 23:17:23 Done! 23:17:23 -Assigning results to build: 23:17:23 Done! 23:17:23 -Checking thresholds: 23:17:23 Done! 23:17:23 Done publishing Robot results. 23:17:23 [PostBuildScript] - [INFO] Executing post build scripts. 
23:17:23 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15652701387531778481.sh 23:17:23 ---> sysstat.sh 23:17:23 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3174444822229441752.sh 23:17:23 ---> package-listing.sh 23:17:23 ++ facter osfamily 23:17:23 ++ tr '[:upper:]' '[:lower:]' 23:17:23 + OS_FAMILY=debian 23:17:23 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:23 + START_PACKAGES=/tmp/packages_start.txt 23:17:23 + END_PACKAGES=/tmp/packages_end.txt 23:17:23 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:23 + PACKAGES=/tmp/packages_start.txt 23:17:23 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:23 + PACKAGES=/tmp/packages_end.txt 23:17:23 + case "${OS_FAMILY}" in 23:17:23 + dpkg -l 23:17:23 + grep '^ii' 23:17:23 + '[' -f /tmp/packages_start.txt ']' 23:17:23 + '[' -f /tmp/packages_end.txt ']' 23:17:23 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:23 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:23 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:23 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:23 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11479186210589839237.sh 23:17:23 ---> capture-instance-metadata.sh 23:17:23 Setup pyenv: 23:17:23 system 23:17:23 3.8.13 23:17:23 3.9.13 23:17:23 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:23 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UIf3 from file:/tmp/.os_lf_venv 23:17:25 lf-activate-venv(): INFO: Installing: lftools 23:17:35 lf-activate-venv(): INFO: Adding /tmp/venv-UIf3/bin to PATH 23:17:35 INFO: Running in OpenStack, capturing instance metadata 23:17:35 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17527508953852326363.sh 23:17:35 provisioning config files... 23:17:35 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config2787848965305973156tmp 23:17:35 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:35 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:35 [EnvInject] - Injecting environment variables from a build step. 23:17:35 [EnvInject] - Injecting as environment variables the properties content 23:17:35 SERVER_ID=logs 23:17:35 23:17:35 [EnvInject] - Variables injected successfully. 23:17:35 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2245820463965265181.sh 23:17:35 ---> create-netrc.sh 23:17:35 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4968735661210762607.sh 23:17:35 ---> python-tools-install.sh 23:17:35 Setup pyenv: 23:17:35 system 23:17:35 3.8.13 23:17:35 3.9.13 23:17:35 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:35 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UIf3 from file:/tmp/.os_lf_venv 23:17:37 lf-activate-venv(): INFO: Installing: lftools 23:17:45 lf-activate-venv(): INFO: Adding /tmp/venv-UIf3/bin to PATH 23:17:45 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13626247836806056844.sh 23:17:45 ---> sudo-logs.sh 23:17:45 Archiving 'sudo' log.. 
23:17:45 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16718339433580485208.sh 23:17:45 ---> job-cost.sh 23:17:45 Setup pyenv: 23:17:45 system 23:17:45 3.8.13 23:17:45 3.9.13 23:17:45 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:45 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UIf3 from file:/tmp/.os_lf_venv 23:17:47 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 23:17:51 lf-activate-venv(): INFO: Adding /tmp/venv-UIf3/bin to PATH 23:17:51 INFO: No Stack... 23:17:52 INFO: Retrieving Pricing Info for: v3-standard-8 23:17:52 INFO: Archiving Costs 23:17:52 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins2275210205085697044.sh 23:17:52 ---> logs-deploy.sh 23:17:52 Setup pyenv: 23:17:52 system 23:17:52 3.8.13 23:17:52 3.9.13 23:17:52 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:52 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UIf3 from file:/tmp/.os_lf_venv 23:17:54 lf-activate-venv(): INFO: Installing: lftools 23:18:02 lf-activate-venv(): INFO: Adding /tmp/venv-UIf3/bin to PATH 23:18:02 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1606 23:18:02 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 23:18:03 Archives upload complete. 23:18:03 INFO: archiving logs to Nexus 23:18:04 ---> uname -a: 23:18:04 Linux prd-ubuntu1804-docker-8c-8g-12398 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 23:18:04 23:18:04 23:18:04 ---> lscpu: 23:18:04 Architecture: x86_64 23:18:04 CPU op-mode(s): 32-bit, 64-bit 23:18:04 Byte Order: Little Endian 23:18:04 CPU(s): 8 23:18:04 On-line CPU(s) list: 0-7 23:18:04 Thread(s) per core: 1 23:18:04 Core(s) per socket: 1 23:18:04 Socket(s): 8 23:18:04 NUMA node(s): 1 23:18:04 Vendor ID: AuthenticAMD 23:18:04 CPU family: 23 23:18:04 Model: 49 23:18:04 Model name: AMD EPYC-Rome Processor 23:18:04 Stepping: 0 23:18:04 CPU MHz: 2799.998 23:18:04 BogoMIPS: 5599.99 23:18:04 Virtualization: AMD-V 23:18:04 Hypervisor vendor: KVM 23:18:04 Virtualization type: full 23:18:04 L1d cache: 32K 23:18:04 L1i cache: 32K 23:18:04 L2 cache: 512K 23:18:04 L3 cache: 16384K 23:18:04 NUMA node0 CPU(s): 0-7 23:18:04 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 23:18:04 23:18:04 23:18:04 ---> nproc: 23:18:04 8 23:18:04 23:18:04 23:18:04 ---> df -h: 23:18:04 Filesystem Size Used Avail Use% Mounted on 23:18:04 udev 16G 0 16G 0% /dev 23:18:04 tmpfs 3.2G 708K 3.2G 1% /run 23:18:04 /dev/vda1 155G 14G 142G 9% / 23:18:04 tmpfs 16G 0 16G 0% /dev/shm 23:18:04 tmpfs 5.0M 0 5.0M 0% /run/lock 23:18:04 tmpfs 16G 0 16G 0% /sys/fs/cgroup 23:18:04 /dev/vda15 105M 4.4M 100M 5% /boot/efi 23:18:04 tmpfs 3.2G 0 3.2G 0% /run/user/1001 23:18:04 23:18:04 23:18:04 ---> free -m: 23:18:04 total used free shared buff/cache available 23:18:04 Mem: 
32167 826 25112 0 6227 30884 23:18:04 Swap: 1023 0 1023 23:18:04 23:18:04 23:18:04 ---> ip addr: 23:18:04 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 23:18:04 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 23:18:04 inet 127.0.0.1/8 scope host lo 23:18:04 valid_lft forever preferred_lft forever 23:18:04 inet6 ::1/128 scope host 23:18:04 valid_lft forever preferred_lft forever 23:18:04 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 23:18:04 link/ether fa:16:3e:4c:30:00 brd ff:ff:ff:ff:ff:ff 23:18:04 inet 10.30.106.194/23 brd 10.30.107.255 scope global dynamic ens3 23:18:04 valid_lft 85938sec preferred_lft 85938sec 23:18:04 inet6 fe80::f816:3eff:fe4c:3000/64 scope link 23:18:04 valid_lft forever preferred_lft forever 23:18:04 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 23:18:04 link/ether 02:42:d0:4e:1f:64 brd ff:ff:ff:ff:ff:ff 23:18:04 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 23:18:04 valid_lft forever preferred_lft forever 23:18:04 23:18:04 23:18:04 ---> sar -b -r -n DEV: 23:18:04 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12398) 03/10/24 _x86_64_ (8 CPU) 23:18:04 23:18:04 23:10:23 LINUX RESTART (8 CPU) 23:18:04 23:18:04 23:11:02 tps rtps wtps bread/s bwrtn/s 23:18:04 23:12:01 104.58 24.98 79.60 1424.44 28419.59 23:18:04 23:13:01 126.75 23.01 103.73 2768.34 33558.54 23:18:04 23:14:01 231.59 0.23 231.36 20.53 133975.67 23:18:04 23:15:01 337.57 12.16 325.41 818.36 48367.86 23:18:04 23:16:01 19.58 0.00 19.58 0.00 21926.63 23:18:04 23:17:01 27.98 0.07 27.91 8.93 22911.30 23:18:04 23:18:01 72.60 1.93 70.67 111.98 15171.55 23:18:04 Average: 131.59 8.87 122.72 734.44 43511.92 23:18:04 23:18:04 23:11:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 23:18:04 23:12:01 30114772 31699404 2824448 8.57 69572 1825588 1467680 4.32 869316 1661512 152232 23:18:04 23:13:01 28946312 31690100 3992908 12.12 98740 2918844 1610188 4.74 964868 2656376 898304 23:18:04 23:14:01 25792412 31694108 7146808 21.70 140644 5886376 1470040 4.33 997216 5620900 493116 23:18:04 23:15:01 23337980 29406564 9601240 29.15 156760 6017656 9065144 26.67 3460384 5530236 1792 23:18:04 23:16:01 23493564 29562856 9445656 28.68 156936 6017940 8881948 26.13 3306036 5527788 192 23:18:04 23:17:01 23735912 29830860 9203308 27.94 157400 6046084 7277072 21.41 3061388 5541908 152 23:18:04 23:18:01 25676568 31586276 7262652 22.05 159360 5874148 1567600 4.61 1331448 5384452 1524 23:18:04 Average: 25871074 30781453 7068146 21.46 134202 4940948 4477096 13.17 1998665 4560453 221045 23:18:04 23:18:04 23:11:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 23:18:04 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:04 23:12:01 lo 1.63 1.63 0.18 0.18 0.00 0.00 0.00 0.00 23:18:04 23:12:01 ens3 57.40 38.86 885.88 7.27 0.00 0.00 0.00 0.00 23:18:04 23:13:01 br-ad0410737e5c 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:04 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:04 23:13:01 lo 6.13 6.13 0.57 0.57 0.00 0.00 0.00 0.00 23:18:04 23:13:01 ens3 163.06 110.10 4455.42 12.53 0.00 0.00 0.00 0.00 23:18:04 23:14:01 br-ad0410737e5c 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:04 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:04 23:14:01 lo 7.13 7.13 0.71 0.71 0.00 0.00 0.00 0.00 23:18:04 23:14:01 ens3 1004.32 487.07 27858.78 35.25 0.00 0.00 0.00 0.00 23:18:04 23:15:01 veth54acc65 76.37 92.15 41.93 23.24 0.00 0.00 0.00 0.00 23:18:04 23:15:01 vethc84431b 0.00 
23:18:04 23:11:02         IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:18:04 23:12:01       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:12:01            lo      1.63      1.63      0.18      0.18      0.00      0.00      0.00      0.00
23:18:04 23:12:01          ens3     57.40     38.86    885.88      7.27      0.00      0.00      0.00      0.00
23:18:04 23:13:01 br-ad0410737e5c    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:13:01       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:13:01            lo      6.13      6.13      0.57      0.57      0.00      0.00      0.00      0.00
23:18:04 23:13:01          ens3    163.06    110.10   4455.42     12.53      0.00      0.00      0.00      0.00
23:18:04 23:14:01 br-ad0410737e5c    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:14:01       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:14:01            lo      7.13      7.13      0.71      0.71      0.00      0.00      0.00      0.00
23:18:04 23:14:01          ens3   1004.32    487.07  27858.78     35.25      0.00      0.00      0.00      0.00
23:18:04 23:15:01   veth54acc65     76.37     92.15     41.93     23.24      0.00      0.00      0.00      0.00
23:18:04 23:15:01   vethc84431b      0.00      0.48      0.00      0.03      0.00      0.00      0.00      0.00
23:18:04 23:15:01 br-ad0410737e5c    1.53      1.43      0.90      1.81      0.00      0.00      0.00      0.00
23:18:04 23:15:01       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:16:01   veth54acc65     30.86     37.26     35.67      8.54      0.00      0.00      0.00      0.00
23:18:04 23:16:01   vethc84431b      0.00      0.02      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:16:01 br-ad0410737e5c    1.57      1.83      0.99      0.27      0.00      0.00      0.00      0.00
23:18:04 23:16:01       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:17:01   veth54acc65      0.20      0.35      0.09      0.06      0.00      0.00      0.00      0.00
23:18:04 23:17:01   vethc84431b      0.00      0.12      0.00      0.01      0.00      0.00      0.00      0.00
23:18:04 23:17:01 br-ad0410737e5c    1.30      1.55      0.10      0.14      0.00      0.00      0.00      0.00
23:18:04 23:17:01       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:18:01       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 23:18:01            lo     34.88     34.88      6.19      6.19      0.00      0.00      0.00      0.00
23:18:04 23:18:01          ens3   1648.53    909.28  34040.47    152.00      0.00      0.00      0.00      0.00
23:18:04 Average:       docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:04 Average:            lo      4.43      4.43      0.84      0.84      0.00      0.00      0.00      0.00
23:18:04 Average:          ens3    189.25    101.42   4763.91     13.51      0.00      0.00      0.00      0.00
23:18:04
23:18:04
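Every data row in the sar -n DEV block above has the same shape (interval time, interface name, then eight counters), so the text is easy to post-process if, for example, you want the per-interface peak throughput rather than the averages sar prints. A purely illustrative parser, fed with the ens3 rows that are visible in this capture:

#!/usr/bin/env python3
# Illustrative only: parse 'sar -n DEV' rows (time, IFACE, then 8 counters).
from collections import defaultdict

SAMPLE = """\
23:12:01 ens3 57.40 38.86 885.88 7.27 0.00 0.00 0.00 0.00
23:13:01 ens3 163.06 110.10 4455.42 12.53 0.00 0.00 0.00 0.00
23:14:01 ens3 1004.32 487.07 27858.78 35.25 0.00 0.00 0.00 0.00
23:18:01 ens3 1648.53 909.28 34040.47 152.00 0.00 0.00 0.00 0.00
"""

def rx_kbps_by_iface(text):
    """Collect the rxkB/s column (5th field) per interface, skipping the header row."""
    series = defaultdict(list)
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 10 and fields[1] != "IFACE":
            series[fields[1]].append(float(fields[4]))
    return series

for iface, values in rx_kbps_by_iface(SAMPLE).items():
    print(f"{iface}: peak rxkB/s {max(values):.2f} across {len(values)} samples")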
23:18:04 ---> sar -P ALL:
23:18:04 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12398)  03/10/24  _x86_64_  (8 CPU)
23:18:04
23:18:04 23:10:23     LINUX RESTART      (8 CPU)
23:18:04
23:18:04 23:11:02     CPU     %user     %nice   %system   %iowait    %steal     %idle
23:18:04 23:12:01     all      9.71      0.00      0.73      2.38      0.03     87.14
23:18:04 23:12:01       0     27.10      0.00      1.58      1.79      0.09     69.44
23:18:04 23:12:01       1      9.58      0.00      0.78      0.08      0.02     89.54
23:18:04 23:12:01       2      4.15      0.00      0.41      0.08      0.03     95.33
23:18:04 23:12:01       3      6.36      0.00      0.39      0.24      0.02     93.00
23:18:04 23:12:01       4      0.02      0.00      0.29     15.09      0.03     84.57
23:18:04 23:12:01       5      8.78      0.00      0.71      0.59      0.03     89.88
23:18:04 23:12:01       6     19.26      0.00      1.14      0.66      0.03     78.90
23:18:04 23:12:01       7      2.46      0.00      0.59      0.58      0.03     96.34
23:18:04 23:13:01     all     10.82      0.00      1.85      2.47      0.04     84.82
23:18:04 23:13:01       0      5.25      0.00      2.03      1.06      0.03     91.63
23:18:04 23:13:01       1      6.33      0.00      1.64      0.82      0.02     91.19
23:18:04 23:13:01       2      4.48      0.00      1.39      0.27      0.03     93.82
23:18:04 23:13:01       3      2.41      0.00      0.95      2.16      0.03     94.44
23:18:04 23:13:01       4      3.83      0.00      1.42     11.28      0.02     83.45
23:18:04 23:13:01       5     32.28      0.00      3.04      2.97      0.07     61.64
23:18:04 23:13:01       6     23.16      0.00      2.60      0.84      0.05     73.35
23:18:04 23:13:01       7      8.90      0.00      1.67      0.32      0.05     89.06
23:18:04 23:14:01     all     11.98      0.00      5.67      7.50      0.08     74.78
23:18:04 23:14:01       0     14.39      0.00      5.88      3.72      0.08     75.93
23:18:04 23:14:01       1     12.22      0.00      5.45      0.10      0.07     82.15
23:18:04 23:14:01       2      9.57      0.00      5.56      8.52      0.10     76.25
23:18:04 23:14:01       3     11.39      0.00      5.22     14.53      0.07     68.79
23:18:04 23:14:01       4     12.49      0.00      7.27     13.04      0.09     67.12
23:18:04 23:14:01       5     13.47      0.00      5.64      0.75      0.07     80.07
23:18:04 23:14:01       6     11.11      0.00      4.94     14.18      0.07     69.71
23:18:04 23:14:01       7     11.18      0.00      5.48      5.16      0.08     78.10
23:18:04 23:15:01     all     28.87      0.00      3.95      3.62      0.08     63.48
23:18:04 23:15:01       0     29.93      0.00      4.03      8.20      0.08     57.75
23:18:04 23:15:01       1     34.44      0.00      4.90      1.59      0.08     58.99
23:18:04 23:15:01       2     31.43      0.00      3.94      1.64      0.08     62.91
23:18:04 23:15:01       3     24.53      0.00      3.47      0.85      0.08     71.06
23:18:04 23:15:01       4     29.71      0.00      4.06      2.02      0.08     64.12
23:18:04 23:15:01       5     27.61      0.00      3.35      2.86      0.07     66.11
23:18:04 23:15:01       6     31.98      0.00      4.45      8.86      0.08     54.63
23:18:04 23:15:01       7     21.31      0.00      3.42      2.94      0.07     72.27
23:18:04 23:16:01     all      4.75      0.00      0.47      1.21      0.05     93.51
23:18:04 23:16:01       0      5.39      0.00      0.42      0.02      0.03     94.14
23:18:04 23:16:01       1      4.61      0.00      0.53      0.17      0.05     94.64
23:18:04 23:16:01       2      5.24      0.00      0.58      0.02      0.05     94.11
23:18:04 23:16:01       3      6.90      0.00      0.75      0.02      0.05     92.28
23:18:04 23:16:01       4      4.54      0.00      0.28      6.51      0.03     88.63
23:18:04 23:16:01       5      4.70      0.00      0.55      0.03      0.05     94.67
23:18:04 23:16:01       6      3.27      0.00      0.27      0.02      0.07     96.38
23:18:04 23:16:01       7      3.40      0.00      0.34      2.88      0.07     93.31
23:18:04 23:17:01     all      1.44      0.00      0.34      1.29      0.05     96.88
23:18:04 23:17:01       0      1.24      0.00      0.40      0.02      0.03     98.31
23:18:04 23:17:01       1      1.99      0.00      0.35      0.10      0.07     97.49
23:18:04 23:17:01       2      1.37      0.00      0.40      0.12      0.05     98.06
23:18:04 23:17:01       3      1.12      0.00      0.30      0.40      0.07     98.11
23:18:04 23:17:01       4      0.52      0.00      0.20      9.67      0.07     89.54
23:18:04 23:17:01       5      1.62      0.00      0.33      0.00      0.05     98.00
23:18:04 23:17:01       6      2.34      0.00      0.40      0.02      0.07     97.18
23:18:04 23:17:01       7      1.30      0.00      0.33      0.00      0.07     98.29
23:18:04 23:18:01     all      7.53      0.00      0.64      1.06      0.03     90.74
23:18:04 23:18:01       0      1.15      0.00      0.67      1.04      0.03     97.11
23:18:04 23:18:01       1      7.02      0.00      0.82      0.10      0.02     92.05
23:18:04 23:18:01       2     44.63      0.00      1.34      0.52      0.07     53.45
23:18:04 23:18:01       3      1.25      0.00      0.42      0.07      0.02     98.25
23:18:04 23:18:01       4      0.72      0.00      0.47      6.43      0.03     92.35
23:18:04 23:18:01       5      2.87      0.00      0.52      0.23      0.03     96.35
23:18:04 23:18:01       6      1.42      0.00      0.45      0.03      0.03     98.07
23:18:04 23:18:01       7      1.15      0.00      0.43      0.07      0.03     98.31
23:18:04 Average:     all     10.71      0.00      1.94      2.78      0.05     84.51
23:18:04 Average:       0     12.02      0.00      2.14      2.26      0.06     83.53
23:18:04 Average:       1     10.87      0.00      2.06      0.42      0.05     86.60
23:18:04 Average:       2     14.42      0.00      1.94      1.58      0.06     81.99
23:18:04 Average:       3      7.70      0.00      1.64      2.60      0.05     88.02
23:18:04 Average:       4      7.39      0.00      1.99      9.13      0.05     81.44
23:18:04 Average:       5     13.03      0.00      2.01      1.06      0.05     83.84
23:18:04 Average:       6     13.18      0.00      2.02      3.49      0.06     81.25
23:18:04 Average:       7      7.10      0.00      1.75      1.70      0.06     89.40
23:18:04
23:18:04
23:18:04
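As a quick sanity check on the sar -P ALL block: taking a simple mean of the seven "all" %idle samples above gives roughly 84.5 %, in line with the 84.51 % on sar's own Average line (sar averages over the full elapsed time rather than over the printed samples, and the first interval here is 59 s rather than 60 s, hence the tiny gap). The values below are copied from the log; the check itself is only illustrative.

#!/usr/bin/env python3
# Illustrative only: unweighted mean of the 'all' %idle samples from sar -P ALL above.
ALL_CPU_IDLE = [87.14, 84.82, 74.78, 63.48, 93.51, 96.88, 90.74]

print(f"unweighted mean %idle: {sum(ALL_CPU_IDLE) / len(ALL_CPU_IDLE):.2f}")  # ~84.48
# sar's own Average line reports 84.51; the small difference is expected because
# sar averages over the actual elapsed time of each interval.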